00:00:00.001 Started by upstream project "autotest-per-patch" build number 132754
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.161 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.161 The recommended git tool is: git
00:00:00.162 using credential 00000000-0000-0000-0000-000000000002
00:00:00.163 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.223 Fetching changes from the remote Git repository
00:00:00.229 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.288 Using shallow fetch with depth 1
00:00:00.288 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.288 > git --version # timeout=10
00:00:00.330 > git --version # 'git version 2.39.2'
00:00:00.330 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.357 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.357 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.491 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.502 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.515 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.515 > git config core.sparsecheckout # timeout=10
00:00:04.527 > git read-tree -mu HEAD # timeout=10
00:00:04.543 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.565 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.565 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.681 [Pipeline] Start of Pipeline
00:00:04.691 [Pipeline] library
00:00:04.692 Loading library shm_lib@master
00:00:04.692 Library shm_lib@master is cached. Copying from home.
00:00:04.726 [Pipeline] node
00:00:04.734 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.736 [Pipeline] {
00:00:04.746 [Pipeline] catchError
00:00:04.747 [Pipeline] {
00:00:04.759 [Pipeline] wrap
00:00:04.766 [Pipeline] {
00:00:04.773 [Pipeline] stage
00:00:04.774 [Pipeline] { (Prologue)
00:00:04.980 [Pipeline] sh
00:00:05.266 + logger -p user.info -t JENKINS-CI
00:00:05.280 [Pipeline] echo
00:00:05.281 Node: GP11
00:00:05.287 [Pipeline] sh
00:00:05.588 [Pipeline] setCustomBuildProperty
00:00:05.599 [Pipeline] echo
00:00:05.601 Cleanup processes
00:00:05.606 [Pipeline] sh
00:00:05.891 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.891 917131 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.904 [Pipeline] sh
00:00:06.188 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.188 ++ grep -v 'sudo pgrep'
00:00:06.188 ++ awk '{print $1}'
00:00:06.188 + sudo kill -9
00:00:06.188 + true
00:00:06.201 [Pipeline] cleanWs
00:00:06.210 [WS-CLEANUP] Deleting project workspace...
00:00:06.210 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.215 [WS-CLEANUP] done
00:00:06.220 [Pipeline] setCustomBuildProperty
00:00:06.232 [Pipeline] sh
00:00:06.516 + sudo git config --global --replace-all safe.directory '*'
00:00:06.587 [Pipeline] httpRequest
00:00:06.944 [Pipeline] echo
00:00:06.947 Sorcerer 10.211.164.101 is alive
00:00:06.956 [Pipeline] retry
00:00:06.958 [Pipeline] {
00:00:06.981 [Pipeline] httpRequest
00:00:06.986 HttpMethod: GET
00:00:06.987 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.988 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.001 Response Code: HTTP/1.1 200 OK
00:00:07.001 Success: Status code 200 is in the accepted range: 200,404
00:00:07.002 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.626 [Pipeline] }
00:00:13.645 [Pipeline] // retry
00:00:13.654 [Pipeline] sh
00:00:13.944 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.963 [Pipeline] httpRequest
00:00:14.661 [Pipeline] echo
00:00:14.663 Sorcerer 10.211.164.101 is alive
00:00:14.671 [Pipeline] retry
00:00:14.673 [Pipeline] {
00:00:14.683 [Pipeline] httpRequest
00:00:14.688 HttpMethod: GET
00:00:14.688 URL: http://10.211.164.101/packages/spdk_1148849d6c67ed21b6e0281b5f8326cf0759ca3e.tar.gz
00:00:14.689 Sending request to url: http://10.211.164.101/packages/spdk_1148849d6c67ed21b6e0281b5f8326cf0759ca3e.tar.gz
00:00:14.703 Response Code: HTTP/1.1 200 OK
00:00:14.703 Success: Status code 200 is in the accepted range: 200,404
00:00:14.704 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1148849d6c67ed21b6e0281b5f8326cf0759ca3e.tar.gz
00:03:39.357 [Pipeline] }
00:03:39.376 [Pipeline] // retry
00:03:39.383 [Pipeline] sh
00:03:39.671 + tar --no-same-owner -xf spdk_1148849d6c67ed21b6e0281b5f8326cf0759ca3e.tar.gz
00:03:42.217 [Pipeline] sh
00:03:42.506 + git -C spdk log --oneline -n5
00:03:42.506 1148849d6 nvme/rdma: Register UMR per IO request
00:03:42.506 0787c2b4e accel/mlx5: Support mkey registration
00:03:42.506 0ea9ac02f accel/mlx5: Create pool of UMRs
00:03:42.506 60adca7e1 lib/mlx5: API to configure UMR
00:03:42.506 c2471e450 nvmf: Clean unassociated_qpairs on connect error
00:03:42.517 [Pipeline] }
00:03:42.530 [Pipeline] // stage
00:03:42.540 [Pipeline] stage
00:03:42.542 [Pipeline] { (Prepare)
00:03:42.559 [Pipeline] writeFile
00:03:42.574 [Pipeline] sh
00:03:42.859 + logger -p user.info -t JENKINS-CI
00:03:42.871 [Pipeline] sh
00:03:43.158 + logger -p user.info -t JENKINS-CI
00:03:43.173 [Pipeline] sh
00:03:43.462 + cat autorun-spdk.conf
00:03:43.462 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:43.462 SPDK_TEST_NVMF=1
00:03:43.462 SPDK_TEST_NVME_CLI=1
00:03:43.462 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:43.462 SPDK_TEST_NVMF_NICS=e810
00:03:43.462 SPDK_TEST_VFIOUSER=1
00:03:43.462 SPDK_RUN_UBSAN=1
00:03:43.462 NET_TYPE=phy
00:03:43.470 RUN_NIGHTLY=0
00:03:43.475 [Pipeline] readFile
00:03:43.500 [Pipeline] withEnv
00:03:43.502 [Pipeline] {
00:03:43.518 [Pipeline] sh
00:03:43.810 + set -ex
00:03:43.810 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:43.810 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:43.810 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:43.810 ++ SPDK_TEST_NVMF=1
00:03:43.810 ++ SPDK_TEST_NVME_CLI=1
00:03:43.810 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:43.810 ++ SPDK_TEST_NVMF_NICS=e810
00:03:43.810 ++ SPDK_TEST_VFIOUSER=1
00:03:43.810 ++ SPDK_RUN_UBSAN=1
00:03:43.810 ++ NET_TYPE=phy
00:03:43.810 ++ RUN_NIGHTLY=0
00:03:43.810 + case $SPDK_TEST_NVMF_NICS in
00:03:43.810 + DRIVERS=ice
00:03:43.810 + [[ tcp == \r\d\m\a ]]
00:03:43.810 + [[ -n ice ]]
00:03:43.810 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:43.810 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:43.810 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:43.810 rmmod: ERROR: Module irdma is not currently loaded
00:03:43.810 rmmod: ERROR: Module i40iw is not currently loaded
00:03:43.810 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:43.810 + true
00:03:43.810 + for D in $DRIVERS
00:03:43.810 + sudo modprobe ice
00:03:43.810 + exit 0
00:03:43.821 [Pipeline] }
00:03:43.836 [Pipeline] // withEnv
00:03:43.841 [Pipeline] }
00:03:43.855 [Pipeline] // stage
00:03:43.866 [Pipeline] catchError
00:03:43.868 [Pipeline] {
00:03:43.882 [Pipeline] timeout
00:03:43.883 Timeout set to expire in 1 hr 0 min
00:03:43.885 [Pipeline] {
00:03:43.899 [Pipeline] stage
00:03:43.901 [Pipeline] { (Tests)
00:03:43.918 [Pipeline] sh
00:03:44.209 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:44.209 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:44.209 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:44.209 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:44.209 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:44.209 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:44.209 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:44.209 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:44.209 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:44.209 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:44.209 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:44.209 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:44.209 + source /etc/os-release
00:03:44.209 ++ NAME='Fedora Linux'
00:03:44.209 ++ VERSION='39 (Cloud Edition)'
00:03:44.209 ++ ID=fedora
00:03:44.209 ++ VERSION_ID=39
00:03:44.209 ++ VERSION_CODENAME=
00:03:44.209 ++ PLATFORM_ID=platform:f39
00:03:44.209 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:44.209 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:44.209 ++ LOGO=fedora-logo-icon
00:03:44.209 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:44.209 ++ HOME_URL=https://fedoraproject.org/
00:03:44.209 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:44.209 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:44.209 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:44.209 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:44.209 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:44.209 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:44.209 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:44.209 ++ SUPPORT_END=2024-11-12
00:03:44.209 ++ VARIANT='Cloud Edition'
00:03:44.209 ++ VARIANT_ID=cloud
00:03:44.209 + uname -a
00:03:44.209 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:44.209 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:45.149 Hugepages
00:03:45.149 node hugesize free / total
00:03:45.149 node0 1048576kB 0 / 0
00:03:45.149 node0 2048kB 0 / 0
00:03:45.149 node1 1048576kB 0 / 0
00:03:45.149 node1 2048kB 0 / 0
00:03:45.149
00:03:45.149 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:45.149 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:45.149 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:45.149 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:45.149 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:45.149 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:45.149 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:45.149 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:45.149 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:45.149 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:45.149 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:45.149 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:45.149 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:45.149 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:45.149 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:45.408 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:45.408 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:45.408 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:45.408 + rm -f /tmp/spdk-ld-path
00:03:45.408 + source autorun-spdk.conf
00:03:45.408 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:45.408 ++ SPDK_TEST_NVMF=1
00:03:45.408 ++ SPDK_TEST_NVME_CLI=1
00:03:45.408 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:45.408 ++ SPDK_TEST_NVMF_NICS=e810
00:03:45.408 ++ SPDK_TEST_VFIOUSER=1
00:03:45.408 ++ SPDK_RUN_UBSAN=1
00:03:45.408 ++ NET_TYPE=phy
00:03:45.408 ++ RUN_NIGHTLY=0
00:03:45.408 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:45.408 + [[ -n '' ]]
00:03:45.408 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:45.408 + for M in /var/spdk/build-*-manifest.txt
00:03:45.408 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:45.408 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:45.408 + for M in /var/spdk/build-*-manifest.txt
00:03:45.408 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:45.408 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:45.408 + for M in /var/spdk/build-*-manifest.txt
00:03:45.408 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:45.408 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:45.408 ++ uname
00:03:45.408 + [[ Linux == \L\i\n\u\x ]]
00:03:45.408 + sudo dmesg -T
00:03:45.408 + sudo dmesg --clear
00:03:45.408 + dmesg_pid=918449
00:03:45.408 + [[ Fedora Linux == FreeBSD ]]
00:03:45.408 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:45.408 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:45.408 + sudo dmesg -Tw
00:03:45.408 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:45.408 + [[ -x /usr/src/fio-static/fio ]]
00:03:45.408 + export FIO_BIN=/usr/src/fio-static/fio
00:03:45.408 + FIO_BIN=/usr/src/fio-static/fio
00:03:45.408 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:45.408 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:45.408 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:45.408 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:45.408 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:45.408 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:45.408 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:45.408 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:45.408 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:45.408 19:01:55 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:45.408 19:01:55 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:45.408 19:01:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:45.408 19:01:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:45.408 19:01:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:45.408 19:01:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:45.408 19:01:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:03:45.408 19:01:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:03:45.408 19:01:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:03:45.408 19:01:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:03:45.408 19:01:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:03:45.408 19:01:55 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:45.408 19:01:55 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:45.408 19:01:55 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:45.408 19:01:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:45.408 19:01:55 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:45.408 19:01:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:45.408 19:01:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:45.408 19:01:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:45.408 19:01:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:45.408 19:01:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:45.408 19:01:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:45.408 19:01:55 -- paths/export.sh@5 -- $ export PATH
00:03:45.408 19:01:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:45.408 19:01:55 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:45.408 19:01:55 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:45.408 19:01:55 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733508115.XXXXXX
00:03:45.408 19:01:55 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733508115.G6pL5f
00:03:45.408 19:01:55 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:45.408 19:01:55 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:45.408 19:01:55 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:45.408 19:01:55 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:45.409 19:01:55 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:45.409 19:01:55 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:45.409 19:01:55 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:45.409 19:01:55 -- common/autotest_common.sh@10 -- $ set +x
00:03:45.667 19:01:55 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:45.667 19:01:55 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:45.667 19:01:55 -- pm/common@17 -- $ local monitor
00:03:45.667 19:01:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:45.667 19:01:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:45.667 19:01:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:45.667 19:01:55 -- pm/common@21 -- $ date +%s
00:03:45.667 19:01:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:45.667 19:01:55 -- pm/common@21 -- $ date +%s
00:03:45.667 19:01:55 -- pm/common@25 -- $ sleep 1
00:03:45.667 19:01:55 -- pm/common@21 -- $ date +%s
00:03:45.667 19:01:55 -- pm/common@21 -- $ date +%s
00:03:45.667 19:01:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508115
00:03:45.667 19:01:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508115
00:03:45.667 19:01:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508115
00:03:45.667 19:01:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508115
00:03:45.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508115_collect-cpu-temp.pm.log
00:03:45.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508115_collect-cpu-load.pm.log
00:03:45.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508115_collect-vmstat.pm.log
00:03:45.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508115_collect-bmc-pm.bmc.pm.log
00:03:46.605 19:01:56 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:46.605 19:01:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:46.605 19:01:56 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:46.605 19:01:56 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:46.605 19:01:56 -- spdk/autobuild.sh@16 -- $ date -u
00:03:46.605 Fri Dec 6 06:01:56 PM UTC 2024
00:03:46.605 19:01:56 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:46.605 v25.01-pre-310-g1148849d6
00:03:46.605 19:01:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:46.605 19:01:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:46.605 19:01:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:46.605 19:01:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:46.605 19:01:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:46.605 19:01:57 -- common/autotest_common.sh@10 -- $ set +x
00:03:46.605 ************************************
00:03:46.605 START TEST ubsan
00:03:46.605 ************************************
00:03:46.605 19:01:57 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' using ubsan
00:03:46.605
00:03:46.605 real 0m0.000s
00:03:46.605 user 0m0.000s
00:03:46.605 sys 0m0.000s
00:03:46.605 19:01:57 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:46.605 19:01:57 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:46.605 ************************************
00:03:46.605 END TEST ubsan
00:03:46.605 ************************************
00:03:46.605 19:01:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:46.605 19:01:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:46.605 19:01:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:46.605 19:01:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:46.605 19:01:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:46.605 19:01:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:46.605 19:01:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:46.606 19:01:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:46.606 19:01:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:46.606 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:46.606 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:46.863 Using 'verbs' RDMA provider
00:03:57.464 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:07.520 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:08.039 Creating mk/config.mk...done.
00:04:08.039 Creating mk/cc.flags.mk...done.
00:04:08.039 Type 'make' to build.
00:04:08.039 19:02:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:04:08.039 19:02:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:08.039 19:02:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:08.039 19:02:18 -- common/autotest_common.sh@10 -- $ set +x
00:04:08.039 ************************************
00:04:08.039 START TEST make
00:04:08.039 ************************************
00:04:08.039 19:02:18 make -- common/autotest_common.sh@1129 -- $ make -j48
00:04:08.298 make[1]: Nothing to be done for 'all'.
00:04:10.219 The Meson build system
00:04:10.219 Version: 1.5.0
00:04:10.219 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:04:10.219 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:10.219 Build type: native build
00:04:10.219 Project name: libvfio-user
00:04:10.219 Project version: 0.0.1
00:04:10.219 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:10.219 C linker for the host machine: cc ld.bfd 2.40-14
00:04:10.219 Host machine cpu family: x86_64
00:04:10.219 Host machine cpu: x86_64
00:04:10.219 Run-time dependency threads found: YES
00:04:10.219 Library dl found: YES
00:04:10.219 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:10.219 Run-time dependency json-c found: YES 0.17
00:04:10.219 Run-time dependency cmocka found: YES 1.1.7
00:04:10.219 Program pytest-3 found: NO
00:04:10.219 Program flake8 found: NO
00:04:10.219 Program misspell-fixer found: NO
00:04:10.219 Program restructuredtext-lint found: NO
00:04:10.219 Program valgrind found: YES (/usr/bin/valgrind)
00:04:10.219 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:10.219 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:10.219 Compiler for C supports arguments -Wwrite-strings: YES
00:04:10.219 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:10.219 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:04:10.219 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:04:10.219 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:10.219 Build targets in project: 8
00:04:10.219 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:10.219 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:10.219
00:04:10.219 libvfio-user 0.0.1
00:04:10.219
00:04:10.219 User defined options
00:04:10.219 buildtype : debug
00:04:10.219 default_library: shared
00:04:10.219 libdir : /usr/local/lib
00:04:10.219
00:04:10.219 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:10.796 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:11.061 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:11.061 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:11.061 [3/37] Compiling C object samples/null.p/null.c.o
00:04:11.061 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:11.061 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:11.061 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:11.061 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:11.061 [8/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:11.061 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:11.061 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:11.324 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:11.324 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:11.325 [13/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:11.325 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:11.325 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:11.325 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:11.325 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:11.325 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:11.325 [19/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:11.325 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:11.325 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:11.325 [22/37] Compiling C object samples/client.p/client.c.o
00:04:11.325 [23/37] Compiling C object samples/server.p/server.c.o
00:04:11.325 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:11.325 [25/37] Linking target samples/client
00:04:11.325 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:11.325 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:11.325 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:11.325 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:04:11.585 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:11.585 [31/37] Linking target test/unit_tests
00:04:11.585 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:11.585 [33/37] Linking target samples/shadow_ioeventfd_server
00:04:11.585 [34/37] Linking target samples/null
00:04:11.585 [35/37] Linking target samples/gpio-pci-idio-16
00:04:11.585 [36/37] Linking target samples/server
00:04:11.585 [37/37] Linking target samples/lspci
00:04:11.585 INFO: autodetecting backend as ninja
00:04:11.585 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:11.848 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:12.422 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:12.681 ninja: no work to do.
00:04:17.943 The Meson build system
00:04:17.943 Version: 1.5.0
00:04:17.943 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:04:17.943 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:04:17.943 Build type: native build
00:04:17.943 Program cat found: YES (/usr/bin/cat)
00:04:17.943 Project name: DPDK
00:04:17.943 Project version: 24.03.0
00:04:17.943 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:17.943 C linker for the host machine: cc ld.bfd 2.40-14
00:04:17.943 Host machine cpu family: x86_64
00:04:17.943 Host machine cpu: x86_64
00:04:17.943 Message: ## Building in Developer Mode ##
00:04:17.943 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:17.943 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:17.943 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:17.943 Program python3 found: YES (/usr/bin/python3)
00:04:17.943 Program cat found: YES (/usr/bin/cat)
00:04:17.943 Compiler for C supports arguments -march=native: YES
00:04:17.943 Checking for size of "void *" : 8
00:04:17.943 Checking for size of "void *" : 8 (cached)
00:04:17.943 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:17.943 Library m found: YES
00:04:17.943 Library numa found: YES
00:04:17.943 Has header "numaif.h" : YES
00:04:17.943 Library fdt found: NO
00:04:17.943 Library execinfo found: NO
00:04:17.943 Has header "execinfo.h" : YES
00:04:17.943 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:17.943 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:17.943 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:17.943 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:17.943 Run-time dependency openssl found: YES 3.1.1
00:04:17.943 Run-time dependency libpcap found: YES 1.10.4
00:04:17.943 Has header "pcap.h" with dependency libpcap: YES
00:04:17.943 Compiler for C supports arguments -Wcast-qual: YES
00:04:17.943 Compiler for C supports arguments -Wdeprecated: YES
00:04:17.943 Compiler for C supports arguments -Wformat: YES
00:04:17.943 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:17.943 Compiler for C supports arguments -Wformat-security: NO
00:04:17.943 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:17.943 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:17.943 Compiler for C supports arguments -Wnested-externs: YES
00:04:17.943 Compiler for C supports arguments -Wold-style-definition: YES
00:04:17.943 Compiler for C supports arguments -Wpointer-arith: YES
00:04:17.943 Compiler for C supports arguments -Wsign-compare: YES
00:04:17.943 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:17.943 Compiler for C supports arguments -Wundef: YES
00:04:17.943 Compiler for C supports arguments -Wwrite-strings: YES
00:04:17.943 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:17.943 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:17.943 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:17.943 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:17.943 Program objdump found: YES (/usr/bin/objdump)
00:04:17.943 Compiler for C supports arguments -mavx512f: YES
00:04:17.943 Checking if "AVX512 checking" compiles: YES
00:04:17.943 Fetching value of define "__SSE4_2__" : 1
00:04:17.943 Fetching value of define "__AES__" : 1
00:04:17.943 Fetching value of define "__AVX__" : 1
00:04:17.943 Fetching value of define "__AVX2__" : (undefined)
00:04:17.943 Fetching value of define "__AVX512BW__" : (undefined)
00:04:17.943 Fetching value of define "__AVX512CD__" : (undefined)
00:04:17.943 Fetching value of define "__AVX512DQ__" : (undefined)
00:04:17.943 Fetching value of define "__AVX512F__" : (undefined)
00:04:17.943 Fetching value of define "__AVX512VL__" : (undefined)
00:04:17.943 Fetching value of define "__PCLMUL__" : 1
00:04:17.943 Fetching value of define "__RDRND__" : 1
00:04:17.943 Fetching value of define "__RDSEED__" : (undefined)
00:04:17.943 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:17.943 Fetching value of define "__znver1__" : (undefined)
00:04:17.944 Fetching value of define "__znver2__" : (undefined)
00:04:17.944 Fetching value of define "__znver3__" : (undefined)
00:04:17.944 Fetching value of define "__znver4__" : (undefined)
00:04:17.944 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:17.944 Message: lib/log: Defining dependency "log"
00:04:17.944 Message: lib/kvargs: Defining dependency "kvargs"
00:04:17.944 Message: lib/telemetry: Defining dependency "telemetry"
00:04:17.944 Checking for function "getentropy" : NO
00:04:17.944 Message: lib/eal: Defining dependency "eal"
00:04:17.944 Message: lib/ring: Defining dependency "ring"
00:04:17.944 Message: lib/rcu: Defining dependency "rcu"
00:04:17.944 Message: lib/mempool: Defining dependency "mempool"
00:04:17.944 Message: lib/mbuf: Defining dependency "mbuf"
00:04:17.944 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:17.944 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:17.944 Compiler for C supports arguments -mpclmul: YES
00:04:17.944 Compiler for C supports arguments -maes: YES
00:04:17.944 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:17.944 Compiler for C supports arguments -mavx512bw: YES
00:04:17.944 Compiler for C supports arguments -mavx512dq: YES
00:04:17.944 Compiler for C supports arguments -mavx512vl: YES
00:04:17.944 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:17.944 Compiler for C supports arguments -mavx2: YES
00:04:17.944 Compiler for C supports arguments -mavx: YES
00:04:17.944 Message: lib/net: Defining dependency "net"
00:04:17.944
Message: lib/meter: Defining dependency "meter" 00:04:17.944 Message: lib/ethdev: Defining dependency "ethdev" 00:04:17.944 Message: lib/pci: Defining dependency "pci" 00:04:17.944 Message: lib/cmdline: Defining dependency "cmdline" 00:04:17.944 Message: lib/hash: Defining dependency "hash" 00:04:17.944 Message: lib/timer: Defining dependency "timer" 00:04:17.944 Message: lib/compressdev: Defining dependency "compressdev" 00:04:17.944 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:17.944 Message: lib/dmadev: Defining dependency "dmadev" 00:04:17.944 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:17.944 Message: lib/power: Defining dependency "power" 00:04:17.944 Message: lib/reorder: Defining dependency "reorder" 00:04:17.944 Message: lib/security: Defining dependency "security" 00:04:17.944 Has header "linux/userfaultfd.h" : YES 00:04:17.944 Has header "linux/vduse.h" : YES 00:04:17.944 Message: lib/vhost: Defining dependency "vhost" 00:04:17.944 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:17.944 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:17.944 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:17.944 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:17.944 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:17.944 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:17.944 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:17.944 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:17.944 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:17.944 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:17.944 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:17.944 Configuring doxy-api-html.conf using configuration 00:04:17.944 Configuring doxy-api-man.conf using configuration 00:04:17.944 
Program mandb found: YES (/usr/bin/mandb) 00:04:17.944 Program sphinx-build found: NO 00:04:17.944 Configuring rte_build_config.h using configuration 00:04:17.944 Message: 00:04:17.944 ================= 00:04:17.944 Applications Enabled 00:04:17.944 ================= 00:04:17.944 00:04:17.944 apps: 00:04:17.944 00:04:17.944 00:04:17.944 Message: 00:04:17.944 ================= 00:04:17.944 Libraries Enabled 00:04:17.944 ================= 00:04:17.944 00:04:17.944 libs: 00:04:17.944 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:17.944 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:17.944 cryptodev, dmadev, power, reorder, security, vhost, 00:04:17.944 00:04:17.944 Message: 00:04:17.944 =============== 00:04:17.944 Drivers Enabled 00:04:17.944 =============== 00:04:17.944 00:04:17.944 common: 00:04:17.944 00:04:17.944 bus: 00:04:17.944 pci, vdev, 00:04:17.944 mempool: 00:04:17.944 ring, 00:04:17.944 dma: 00:04:17.944 00:04:17.944 net: 00:04:17.944 00:04:17.944 crypto: 00:04:17.944 00:04:17.944 compress: 00:04:17.944 00:04:17.944 vdpa: 00:04:17.944 00:04:17.944 00:04:17.944 Message: 00:04:17.944 ================= 00:04:17.944 Content Skipped 00:04:17.944 ================= 00:04:17.944 00:04:17.944 apps: 00:04:17.944 dumpcap: explicitly disabled via build config 00:04:17.944 graph: explicitly disabled via build config 00:04:17.944 pdump: explicitly disabled via build config 00:04:17.944 proc-info: explicitly disabled via build config 00:04:17.944 test-acl: explicitly disabled via build config 00:04:17.944 test-bbdev: explicitly disabled via build config 00:04:17.944 test-cmdline: explicitly disabled via build config 00:04:17.944 test-compress-perf: explicitly disabled via build config 00:04:17.944 test-crypto-perf: explicitly disabled via build config 00:04:17.944 test-dma-perf: explicitly disabled via build config 00:04:17.944 test-eventdev: explicitly disabled via build config 00:04:17.944 test-fib: explicitly disabled via build 
config 00:04:17.944 test-flow-perf: explicitly disabled via build config 00:04:17.944 test-gpudev: explicitly disabled via build config 00:04:17.944 test-mldev: explicitly disabled via build config 00:04:17.944 test-pipeline: explicitly disabled via build config 00:04:17.944 test-pmd: explicitly disabled via build config 00:04:17.944 test-regex: explicitly disabled via build config 00:04:17.944 test-sad: explicitly disabled via build config 00:04:17.944 test-security-perf: explicitly disabled via build config 00:04:17.944 00:04:17.944 libs: 00:04:17.944 argparse: explicitly disabled via build config 00:04:17.944 metrics: explicitly disabled via build config 00:04:17.944 acl: explicitly disabled via build config 00:04:17.944 bbdev: explicitly disabled via build config 00:04:17.944 bitratestats: explicitly disabled via build config 00:04:17.944 bpf: explicitly disabled via build config 00:04:17.944 cfgfile: explicitly disabled via build config 00:04:17.944 distributor: explicitly disabled via build config 00:04:17.944 efd: explicitly disabled via build config 00:04:17.944 eventdev: explicitly disabled via build config 00:04:17.944 dispatcher: explicitly disabled via build config 00:04:17.944 gpudev: explicitly disabled via build config 00:04:17.944 gro: explicitly disabled via build config 00:04:17.944 gso: explicitly disabled via build config 00:04:17.944 ip_frag: explicitly disabled via build config 00:04:17.944 jobstats: explicitly disabled via build config 00:04:17.944 latencystats: explicitly disabled via build config 00:04:17.944 lpm: explicitly disabled via build config 00:04:17.944 member: explicitly disabled via build config 00:04:17.944 pcapng: explicitly disabled via build config 00:04:17.944 rawdev: explicitly disabled via build config 00:04:17.944 regexdev: explicitly disabled via build config 00:04:17.944 mldev: explicitly disabled via build config 00:04:17.944 rib: explicitly disabled via build config 00:04:17.944 sched: explicitly disabled via build 
config 00:04:17.944 stack: explicitly disabled via build config 00:04:17.944 ipsec: explicitly disabled via build config 00:04:17.944 pdcp: explicitly disabled via build config 00:04:17.944 fib: explicitly disabled via build config 00:04:17.944 port: explicitly disabled via build config 00:04:17.944 pdump: explicitly disabled via build config 00:04:17.944 table: explicitly disabled via build config 00:04:17.944 pipeline: explicitly disabled via build config 00:04:17.944 graph: explicitly disabled via build config 00:04:17.944 node: explicitly disabled via build config 00:04:17.944 00:04:17.944 drivers: 00:04:17.944 common/cpt: not in enabled drivers build config 00:04:17.944 common/dpaax: not in enabled drivers build config 00:04:17.944 common/iavf: not in enabled drivers build config 00:04:17.944 common/idpf: not in enabled drivers build config 00:04:17.944 common/ionic: not in enabled drivers build config 00:04:17.944 common/mvep: not in enabled drivers build config 00:04:17.944 common/octeontx: not in enabled drivers build config 00:04:17.944 bus/auxiliary: not in enabled drivers build config 00:04:17.944 bus/cdx: not in enabled drivers build config 00:04:17.944 bus/dpaa: not in enabled drivers build config 00:04:17.944 bus/fslmc: not in enabled drivers build config 00:04:17.944 bus/ifpga: not in enabled drivers build config 00:04:17.944 bus/platform: not in enabled drivers build config 00:04:17.944 bus/uacce: not in enabled drivers build config 00:04:17.944 bus/vmbus: not in enabled drivers build config 00:04:17.944 common/cnxk: not in enabled drivers build config 00:04:17.944 common/mlx5: not in enabled drivers build config 00:04:17.944 common/nfp: not in enabled drivers build config 00:04:17.944 common/nitrox: not in enabled drivers build config 00:04:17.944 common/qat: not in enabled drivers build config 00:04:17.944 common/sfc_efx: not in enabled drivers build config 00:04:17.944 mempool/bucket: not in enabled drivers build config 00:04:17.945 mempool/cnxk: 
not in enabled drivers build config 00:04:17.945 mempool/dpaa: not in enabled drivers build config 00:04:17.945 mempool/dpaa2: not in enabled drivers build config 00:04:17.945 mempool/octeontx: not in enabled drivers build config 00:04:17.945 mempool/stack: not in enabled drivers build config 00:04:17.945 dma/cnxk: not in enabled drivers build config 00:04:17.945 dma/dpaa: not in enabled drivers build config 00:04:17.945 dma/dpaa2: not in enabled drivers build config 00:04:17.945 dma/hisilicon: not in enabled drivers build config 00:04:17.945 dma/idxd: not in enabled drivers build config 00:04:17.945 dma/ioat: not in enabled drivers build config 00:04:17.945 dma/skeleton: not in enabled drivers build config 00:04:17.945 net/af_packet: not in enabled drivers build config 00:04:17.945 net/af_xdp: not in enabled drivers build config 00:04:17.945 net/ark: not in enabled drivers build config 00:04:17.945 net/atlantic: not in enabled drivers build config 00:04:17.945 net/avp: not in enabled drivers build config 00:04:17.945 net/axgbe: not in enabled drivers build config 00:04:17.945 net/bnx2x: not in enabled drivers build config 00:04:17.945 net/bnxt: not in enabled drivers build config 00:04:17.945 net/bonding: not in enabled drivers build config 00:04:17.945 net/cnxk: not in enabled drivers build config 00:04:17.945 net/cpfl: not in enabled drivers build config 00:04:17.945 net/cxgbe: not in enabled drivers build config 00:04:17.945 net/dpaa: not in enabled drivers build config 00:04:17.945 net/dpaa2: not in enabled drivers build config 00:04:17.945 net/e1000: not in enabled drivers build config 00:04:17.945 net/ena: not in enabled drivers build config 00:04:17.945 net/enetc: not in enabled drivers build config 00:04:17.945 net/enetfec: not in enabled drivers build config 00:04:17.945 net/enic: not in enabled drivers build config 00:04:17.945 net/failsafe: not in enabled drivers build config 00:04:17.945 net/fm10k: not in enabled drivers build config 00:04:17.945 
net/gve: not in enabled drivers build config 00:04:17.945 net/hinic: not in enabled drivers build config 00:04:17.945 net/hns3: not in enabled drivers build config 00:04:17.945 net/i40e: not in enabled drivers build config 00:04:17.945 net/iavf: not in enabled drivers build config 00:04:17.945 net/ice: not in enabled drivers build config 00:04:17.945 net/idpf: not in enabled drivers build config 00:04:17.945 net/igc: not in enabled drivers build config 00:04:17.945 net/ionic: not in enabled drivers build config 00:04:17.945 net/ipn3ke: not in enabled drivers build config 00:04:17.945 net/ixgbe: not in enabled drivers build config 00:04:17.945 net/mana: not in enabled drivers build config 00:04:17.945 net/memif: not in enabled drivers build config 00:04:17.945 net/mlx4: not in enabled drivers build config 00:04:17.945 net/mlx5: not in enabled drivers build config 00:04:17.945 net/mvneta: not in enabled drivers build config 00:04:17.945 net/mvpp2: not in enabled drivers build config 00:04:17.945 net/netvsc: not in enabled drivers build config 00:04:17.945 net/nfb: not in enabled drivers build config 00:04:17.945 net/nfp: not in enabled drivers build config 00:04:17.945 net/ngbe: not in enabled drivers build config 00:04:17.945 net/null: not in enabled drivers build config 00:04:17.945 net/octeontx: not in enabled drivers build config 00:04:17.945 net/octeon_ep: not in enabled drivers build config 00:04:17.945 net/pcap: not in enabled drivers build config 00:04:17.945 net/pfe: not in enabled drivers build config 00:04:17.945 net/qede: not in enabled drivers build config 00:04:17.945 net/ring: not in enabled drivers build config 00:04:17.945 net/sfc: not in enabled drivers build config 00:04:17.945 net/softnic: not in enabled drivers build config 00:04:17.945 net/tap: not in enabled drivers build config 00:04:17.945 net/thunderx: not in enabled drivers build config 00:04:17.945 net/txgbe: not in enabled drivers build config 00:04:17.945 net/vdev_netvsc: not in enabled 
drivers build config 00:04:17.945 net/vhost: not in enabled drivers build config 00:04:17.945 net/virtio: not in enabled drivers build config 00:04:17.945 net/vmxnet3: not in enabled drivers build config 00:04:17.945 raw/*: missing internal dependency, "rawdev" 00:04:17.945 crypto/armv8: not in enabled drivers build config 00:04:17.945 crypto/bcmfs: not in enabled drivers build config 00:04:17.945 crypto/caam_jr: not in enabled drivers build config 00:04:17.945 crypto/ccp: not in enabled drivers build config 00:04:17.945 crypto/cnxk: not in enabled drivers build config 00:04:17.945 crypto/dpaa_sec: not in enabled drivers build config 00:04:17.945 crypto/dpaa2_sec: not in enabled drivers build config 00:04:17.945 crypto/ipsec_mb: not in enabled drivers build config 00:04:17.945 crypto/mlx5: not in enabled drivers build config 00:04:17.945 crypto/mvsam: not in enabled drivers build config 00:04:17.945 crypto/nitrox: not in enabled drivers build config 00:04:17.945 crypto/null: not in enabled drivers build config 00:04:17.945 crypto/octeontx: not in enabled drivers build config 00:04:17.945 crypto/openssl: not in enabled drivers build config 00:04:17.945 crypto/scheduler: not in enabled drivers build config 00:04:17.945 crypto/uadk: not in enabled drivers build config 00:04:17.945 crypto/virtio: not in enabled drivers build config 00:04:17.945 compress/isal: not in enabled drivers build config 00:04:17.945 compress/mlx5: not in enabled drivers build config 00:04:17.945 compress/nitrox: not in enabled drivers build config 00:04:17.945 compress/octeontx: not in enabled drivers build config 00:04:17.945 compress/zlib: not in enabled drivers build config 00:04:17.945 regex/*: missing internal dependency, "regexdev" 00:04:17.945 ml/*: missing internal dependency, "mldev" 00:04:17.945 vdpa/ifc: not in enabled drivers build config 00:04:17.945 vdpa/mlx5: not in enabled drivers build config 00:04:17.945 vdpa/nfp: not in enabled drivers build config 00:04:17.945 vdpa/sfc: not 
in enabled drivers build config 00:04:17.945 event/*: missing internal dependency, "eventdev" 00:04:17.945 baseband/*: missing internal dependency, "bbdev" 00:04:17.945 gpu/*: missing internal dependency, "gpudev" 00:04:17.945 00:04:17.945 00:04:17.945 Build targets in project: 85 00:04:17.945 00:04:17.945 DPDK 24.03.0 00:04:17.945 00:04:17.945 User defined options 00:04:17.945 buildtype : debug 00:04:17.945 default_library : shared 00:04:17.945 libdir : lib 00:04:17.945 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:17.945 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:17.945 c_link_args : 00:04:17.945 cpu_instruction_set: native 00:04:17.945 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:04:17.945 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:04:17.945 enable_docs : false 00:04:17.945 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:17.945 enable_kmods : false 00:04:17.945 max_lcores : 128 00:04:17.945 tests : false 00:04:17.945 00:04:17.945 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:18.211 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:18.211 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:18.211 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:18.211 [3/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:18.211 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:18.211 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:18.211 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:18.211 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:18.211 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:18.211 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:18.211 [10/268] Linking static target lib/librte_kvargs.a 00:04:18.211 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:18.211 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:18.211 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:18.211 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:18.211 [15/268] Linking static target lib/librte_log.a 00:04:18.471 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:18.731 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.992 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:18.992 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:18.992 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:18.992 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:18.992 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:18.993 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:18.993 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:18.993 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:18.993 [26/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:18.993 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:18.993 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:18.993 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:18.993 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:18.993 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:18.993 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:18.993 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:18.993 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:18.993 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:18.993 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:18.993 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:18.993 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:18.993 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:19.254 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:19.254 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:19.254 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:19.254 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:19.254 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:19.254 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:19.254 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:19.255 [47/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:19.255 
[48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:19.255 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:19.255 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:19.255 [51/268] Linking static target lib/librte_telemetry.a 00:04:19.255 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:19.255 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:19.255 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:19.255 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:19.255 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:19.255 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:19.255 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:19.255 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:19.255 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:19.255 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:19.516 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:19.516 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:19.516 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:19.516 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.516 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:19.516 [67/268] Linking target lib/librte_log.so.24.1 00:04:19.776 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:19.776 [69/268] Linking static target lib/librte_pci.a 00:04:19.776 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:19.776 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:19.776 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:19.776 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:20.040 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:20.040 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:20.040 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:20.040 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:20.040 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:20.040 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:20.040 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:20.040 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:20.040 [82/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:20.040 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:20.040 [84/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:20.040 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:20.040 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:20.040 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:20.040 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:20.040 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:20.040 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:20.040 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:20.040 [92/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:20.040 [93/268] Linking static target lib/librte_meter.a 00:04:20.040 [94/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:20.301 [95/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:20.301 [96/268] Linking target lib/librte_kvargs.so.24.1 00:04:20.301 [97/268] Linking static target lib/librte_ring.a 00:04:20.301 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:20.301 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:20.301 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:20.301 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:20.301 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:20.301 [103/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.301 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:20.301 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:20.301 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.301 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:20.301 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:20.301 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:20.301 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:20.301 [111/268] Linking static target lib/librte_rcu.a 00:04:20.301 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:20.301 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:20.301 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:20.301 [115/268] Linking target lib/librte_telemetry.so.24.1 00:04:20.301 [116/268] Linking static target lib/librte_mempool.a 00:04:20.301 [117/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:20.301 [118/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:20.301 [119/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:20.301 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:20.562 [121/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:20.562 [122/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:20.562 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:20.562 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:20.562 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:20.562 [126/268] Linking static target lib/librte_eal.a 00:04:20.562 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:20.562 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:20.562 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:20.562 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:20.562 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:20.562 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:20.562 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:20.562 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:20.820 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.820 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:20.820 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:20.820 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:20.820 [139/268] Linking static target lib/librte_net.a 00:04:20.820 [140/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:20.820 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.820 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:21.079 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:21.079 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:21.079 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.079 [146/268] Linking static target lib/librte_cmdline.a 00:04:21.079 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:21.079 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:21.079 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:21.079 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:21.079 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:21.079 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:21.337 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:21.337 [154/268] Linking static target lib/librte_timer.a 00:04:21.337 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:21.338 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.338 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:21.338 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:21.338 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:21.338 [160/268] Linking static target lib/librte_dmadev.a 00:04:21.338 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:21.338 [162/268] Compiling C 
object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:21.338 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:21.596 [164/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.596 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:21.596 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:21.596 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:21.596 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:21.596 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:21.596 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:21.596 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:21.596 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.596 [173/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:21.596 [174/268] Linking static target lib/librte_power.a 00:04:21.596 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:21.596 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:21.596 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:21.596 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:21.596 [179/268] Linking static target lib/librte_hash.a 00:04:21.596 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:21.596 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:21.596 [182/268] Linking static target lib/librte_compressdev.a 00:04:21.596 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:21.854 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 
00:04:21.854 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:21.854 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:21.854 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:21.854 [188/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.854 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:21.854 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:21.854 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.854 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:21.854 [193/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:21.854 [194/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:22.113 [195/268] Linking static target drivers/librte_bus_vdev.a 00:04:22.113 [196/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:22.113 [197/268] Linking static target lib/librte_mbuf.a 00:04:22.113 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:22.113 [199/268] Linking static target lib/librte_security.a 00:04:22.113 [200/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:22.113 [201/268] Linking static target lib/librte_reorder.a 00:04:22.113 [202/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:22.113 [203/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:22.113 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:22.113 [205/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.113 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 
00:04:22.113 [207/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.113 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:22.113 [209/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.113 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:22.113 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:22.113 [212/268] Linking static target drivers/librte_bus_pci.a 00:04:22.113 [213/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.371 [214/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:22.371 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:22.371 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:22.371 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:22.371 [218/268] Linking static target drivers/librte_mempool_ring.a 00:04:22.371 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.371 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.371 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:22.371 [222/268] Linking static target lib/librte_ethdev.a 00:04:22.371 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.629 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:22.629 [225/268] Linking static target lib/librte_cryptodev.a 00:04:22.629 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.563 [227/268] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.937 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:26.834 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.834 [230/268] Linking target lib/librte_eal.so.24.1 00:04:26.834 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.834 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:27.090 [233/268] Linking target lib/librte_ring.so.24.1 00:04:27.090 [234/268] Linking target lib/librte_timer.so.24.1 00:04:27.090 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:27.090 [236/268] Linking target lib/librte_meter.so.24.1 00:04:27.090 [237/268] Linking target lib/librte_pci.so.24.1 00:04:27.090 [238/268] Linking target lib/librte_dmadev.so.24.1 00:04:27.090 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:27.090 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:27.090 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:27.090 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:27.090 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:27.090 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:27.090 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:27.090 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:27.347 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:27.347 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:27.347 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:27.347 [250/268] Linking target lib/librte_mbuf.so.24.1 00:04:27.347 
[251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:27.604 [252/268] Linking target lib/librte_reorder.so.24.1 00:04:27.604 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:27.604 [254/268] Linking target lib/librte_net.so.24.1 00:04:27.604 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:27.604 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:27.604 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:27.604 [258/268] Linking target lib/librte_cmdline.so.24.1 00:04:27.604 [259/268] Linking target lib/librte_security.so.24.1 00:04:27.604 [260/268] Linking target lib/librte_hash.so.24.1 00:04:27.604 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:27.861 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:27.861 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:27.861 [264/268] Linking target lib/librte_power.so.24.1 00:04:31.140 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:31.140 [266/268] Linking static target lib/librte_vhost.a 00:04:31.706 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.706 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:31.706 INFO: autodetecting backend as ninja 00:04:31.706 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:04:53.646 CC lib/log/log.o 00:04:53.646 CC lib/log/log_flags.o 00:04:53.646 CC lib/log/log_deprecated.o 00:04:53.646 CC lib/ut/ut.o 00:04:53.646 CC lib/ut_mock/mock.o 00:04:53.646 LIB libspdk_ut.a 00:04:53.646 LIB libspdk_ut_mock.a 00:04:53.646 LIB libspdk_log.a 00:04:53.646 SO libspdk_ut.so.2.0 00:04:53.646 SO libspdk_ut_mock.so.6.0 00:04:53.646 SO libspdk_log.so.7.1 00:04:53.646 SYMLINK 
libspdk_ut.so 00:04:53.646 SYMLINK libspdk_ut_mock.so 00:04:53.646 SYMLINK libspdk_log.so 00:04:53.646 CC lib/dma/dma.o 00:04:53.646 CXX lib/trace_parser/trace.o 00:04:53.646 CC lib/ioat/ioat.o 00:04:53.646 CC lib/util/base64.o 00:04:53.646 CC lib/util/bit_array.o 00:04:53.646 CC lib/util/cpuset.o 00:04:53.646 CC lib/util/crc16.o 00:04:53.646 CC lib/util/crc32.o 00:04:53.646 CC lib/util/crc32c.o 00:04:53.646 CC lib/util/crc32_ieee.o 00:04:53.646 CC lib/util/crc64.o 00:04:53.646 CC lib/util/dif.o 00:04:53.646 CC lib/util/fd.o 00:04:53.646 CC lib/util/fd_group.o 00:04:53.646 CC lib/util/file.o 00:04:53.646 CC lib/util/hexlify.o 00:04:53.646 CC lib/util/iov.o 00:04:53.646 CC lib/util/math.o 00:04:53.646 CC lib/util/net.o 00:04:53.646 CC lib/util/pipe.o 00:04:53.646 CC lib/util/strerror_tls.o 00:04:53.646 CC lib/util/string.o 00:04:53.646 CC lib/util/uuid.o 00:04:53.646 CC lib/util/xor.o 00:04:53.646 CC lib/util/md5.o 00:04:53.646 CC lib/util/zipf.o 00:04:53.646 CC lib/vfio_user/host/vfio_user_pci.o 00:04:53.646 CC lib/vfio_user/host/vfio_user.o 00:04:53.646 LIB libspdk_dma.a 00:04:53.646 SO libspdk_dma.so.5.0 00:04:53.646 LIB libspdk_ioat.a 00:04:53.646 SYMLINK libspdk_dma.so 00:04:53.646 SO libspdk_ioat.so.7.0 00:04:53.646 SYMLINK libspdk_ioat.so 00:04:53.646 LIB libspdk_vfio_user.a 00:04:53.646 SO libspdk_vfio_user.so.5.0 00:04:53.646 SYMLINK libspdk_vfio_user.so 00:04:53.646 LIB libspdk_util.a 00:04:53.646 SO libspdk_util.so.10.1 00:04:53.646 SYMLINK libspdk_util.so 00:04:53.646 CC lib/idxd/idxd.o 00:04:53.646 CC lib/idxd/idxd_user.o 00:04:53.646 CC lib/idxd/idxd_kernel.o 00:04:53.646 CC lib/vmd/vmd.o 00:04:53.646 CC lib/vmd/led.o 00:04:53.646 CC lib/conf/conf.o 00:04:53.646 CC lib/json/json_parse.o 00:04:53.646 CC lib/rdma_utils/rdma_utils.o 00:04:53.646 CC lib/json/json_util.o 00:04:53.646 CC lib/env_dpdk/env.o 00:04:53.646 CC lib/json/json_write.o 00:04:53.647 CC lib/env_dpdk/memory.o 00:04:53.647 CC lib/env_dpdk/pci.o 00:04:53.647 CC lib/env_dpdk/init.o 
00:04:53.647 CC lib/env_dpdk/threads.o 00:04:53.647 CC lib/env_dpdk/pci_ioat.o 00:04:53.647 CC lib/env_dpdk/pci_virtio.o 00:04:53.647 CC lib/env_dpdk/pci_vmd.o 00:04:53.647 CC lib/env_dpdk/pci_idxd.o 00:04:53.647 CC lib/env_dpdk/pci_event.o 00:04:53.647 CC lib/env_dpdk/sigbus_handler.o 00:04:53.647 CC lib/env_dpdk/pci_dpdk.o 00:04:53.647 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:53.647 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:53.647 LIB libspdk_trace_parser.a 00:04:53.647 SO libspdk_trace_parser.so.6.0 00:04:53.647 SYMLINK libspdk_trace_parser.so 00:04:53.647 LIB libspdk_conf.a 00:04:53.647 SO libspdk_conf.so.6.0 00:04:53.647 LIB libspdk_rdma_utils.a 00:04:53.647 LIB libspdk_json.a 00:04:53.647 SYMLINK libspdk_conf.so 00:04:53.647 SO libspdk_rdma_utils.so.1.0 00:04:53.647 SO libspdk_json.so.6.0 00:04:53.647 SYMLINK libspdk_rdma_utils.so 00:04:53.647 SYMLINK libspdk_json.so 00:04:53.647 CC lib/rdma_provider/common.o 00:04:53.647 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:53.647 CC lib/jsonrpc/jsonrpc_server.o 00:04:53.647 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:53.647 CC lib/jsonrpc/jsonrpc_client.o 00:04:53.647 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:53.647 LIB libspdk_idxd.a 00:04:53.647 SO libspdk_idxd.so.12.1 00:04:53.647 LIB libspdk_vmd.a 00:04:53.647 SO libspdk_vmd.so.6.0 00:04:53.647 SYMLINK libspdk_idxd.so 00:04:53.647 SYMLINK libspdk_vmd.so 00:04:53.647 LIB libspdk_rdma_provider.a 00:04:53.647 SO libspdk_rdma_provider.so.7.0 00:04:53.647 LIB libspdk_jsonrpc.a 00:04:53.647 SYMLINK libspdk_rdma_provider.so 00:04:53.647 SO libspdk_jsonrpc.so.6.0 00:04:53.647 SYMLINK libspdk_jsonrpc.so 00:04:53.647 CC lib/rpc/rpc.o 00:04:53.904 LIB libspdk_rpc.a 00:04:53.904 SO libspdk_rpc.so.6.0 00:04:54.161 SYMLINK libspdk_rpc.so 00:04:54.161 CC lib/trace/trace.o 00:04:54.161 CC lib/trace/trace_flags.o 00:04:54.161 CC lib/trace/trace_rpc.o 00:04:54.161 CC lib/notify/notify.o 00:04:54.161 CC lib/keyring/keyring.o 00:04:54.161 CC lib/notify/notify_rpc.o 00:04:54.161 CC 
lib/keyring/keyring_rpc.o 00:04:54.418 LIB libspdk_notify.a 00:04:54.418 SO libspdk_notify.so.6.0 00:04:54.418 SYMLINK libspdk_notify.so 00:04:54.418 LIB libspdk_keyring.a 00:04:54.418 LIB libspdk_trace.a 00:04:54.418 SO libspdk_keyring.so.2.0 00:04:54.418 SO libspdk_trace.so.11.0 00:04:54.418 SYMLINK libspdk_keyring.so 00:04:54.677 SYMLINK libspdk_trace.so 00:04:54.677 CC lib/thread/thread.o 00:04:54.677 CC lib/thread/iobuf.o 00:04:54.677 CC lib/sock/sock.o 00:04:54.677 CC lib/sock/sock_rpc.o 00:04:54.936 LIB libspdk_env_dpdk.a 00:04:54.936 SO libspdk_env_dpdk.so.15.1 00:04:54.936 SYMLINK libspdk_env_dpdk.so 00:04:55.193 LIB libspdk_sock.a 00:04:55.193 SO libspdk_sock.so.10.0 00:04:55.193 SYMLINK libspdk_sock.so 00:04:55.464 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:55.464 CC lib/nvme/nvme_ctrlr.o 00:04:55.464 CC lib/nvme/nvme_fabric.o 00:04:55.464 CC lib/nvme/nvme_ns_cmd.o 00:04:55.464 CC lib/nvme/nvme_ns.o 00:04:55.464 CC lib/nvme/nvme_pcie_common.o 00:04:55.464 CC lib/nvme/nvme_pcie.o 00:04:55.464 CC lib/nvme/nvme_qpair.o 00:04:55.464 CC lib/nvme/nvme.o 00:04:55.464 CC lib/nvme/nvme_quirks.o 00:04:55.464 CC lib/nvme/nvme_transport.o 00:04:55.464 CC lib/nvme/nvme_discovery.o 00:04:55.464 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:55.464 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:55.464 CC lib/nvme/nvme_tcp.o 00:04:55.464 CC lib/nvme/nvme_opal.o 00:04:55.464 CC lib/nvme/nvme_io_msg.o 00:04:55.464 CC lib/nvme/nvme_poll_group.o 00:04:55.464 CC lib/nvme/nvme_zns.o 00:04:55.464 CC lib/nvme/nvme_stubs.o 00:04:55.464 CC lib/nvme/nvme_auth.o 00:04:55.464 CC lib/nvme/nvme_cuse.o 00:04:55.464 CC lib/nvme/nvme_rdma.o 00:04:55.464 CC lib/nvme/nvme_vfio_user.o 00:04:56.396 LIB libspdk_thread.a 00:04:56.396 SO libspdk_thread.so.11.0 00:04:56.396 SYMLINK libspdk_thread.so 00:04:56.654 CC lib/virtio/virtio.o 00:04:56.654 CC lib/fsdev/fsdev.o 00:04:56.654 CC lib/blob/blobstore.o 00:04:56.654 CC lib/fsdev/fsdev_io.o 00:04:56.654 CC lib/vfu_tgt/tgt_endpoint.o 00:04:56.654 CC 
lib/virtio/virtio_vhost_user.o 00:04:56.654 CC lib/accel/accel.o 00:04:56.654 CC lib/blob/request.o 00:04:56.654 CC lib/fsdev/fsdev_rpc.o 00:04:56.654 CC lib/virtio/virtio_vfio_user.o 00:04:56.654 CC lib/blob/zeroes.o 00:04:56.654 CC lib/accel/accel_rpc.o 00:04:56.654 CC lib/init/json_config.o 00:04:56.654 CC lib/vfu_tgt/tgt_rpc.o 00:04:56.654 CC lib/virtio/virtio_pci.o 00:04:56.654 CC lib/accel/accel_sw.o 00:04:56.654 CC lib/blob/blob_bs_dev.o 00:04:56.654 CC lib/init/subsystem.o 00:04:56.654 CC lib/init/subsystem_rpc.o 00:04:56.654 CC lib/init/rpc.o 00:04:56.911 LIB libspdk_init.a 00:04:56.911 SO libspdk_init.so.6.0 00:04:56.911 SYMLINK libspdk_init.so 00:04:56.911 LIB libspdk_virtio.a 00:04:56.911 LIB libspdk_vfu_tgt.a 00:04:56.911 SO libspdk_vfu_tgt.so.3.0 00:04:56.911 SO libspdk_virtio.so.7.0 00:04:57.169 SYMLINK libspdk_vfu_tgt.so 00:04:57.169 SYMLINK libspdk_virtio.so 00:04:57.169 CC lib/event/app.o 00:04:57.169 CC lib/event/reactor.o 00:04:57.169 CC lib/event/log_rpc.o 00:04:57.169 CC lib/event/app_rpc.o 00:04:57.169 CC lib/event/scheduler_static.o 00:04:57.426 LIB libspdk_fsdev.a 00:04:57.426 SO libspdk_fsdev.so.2.0 00:04:57.426 SYMLINK libspdk_fsdev.so 00:04:57.683 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:57.683 LIB libspdk_event.a 00:04:57.683 SO libspdk_event.so.14.0 00:04:57.683 SYMLINK libspdk_event.so 00:04:57.683 LIB libspdk_accel.a 00:04:57.939 SO libspdk_accel.so.16.0 00:04:57.939 SYMLINK libspdk_accel.so 00:04:57.939 LIB libspdk_nvme.a 00:04:57.939 SO libspdk_nvme.so.15.0 00:04:57.939 CC lib/bdev/bdev.o 00:04:57.939 CC lib/bdev/bdev_rpc.o 00:04:57.939 CC lib/bdev/bdev_zone.o 00:04:57.939 CC lib/bdev/part.o 00:04:57.939 CC lib/bdev/scsi_nvme.o 00:04:58.197 SYMLINK libspdk_nvme.so 00:04:58.197 LIB libspdk_fuse_dispatcher.a 00:04:58.197 SO libspdk_fuse_dispatcher.so.1.0 00:04:58.197 SYMLINK libspdk_fuse_dispatcher.so 00:05:00.115 LIB libspdk_blob.a 00:05:00.115 SO libspdk_blob.so.12.0 00:05:00.115 SYMLINK libspdk_blob.so 00:05:00.115 CC 
lib/lvol/lvol.o 00:05:00.115 CC lib/blobfs/blobfs.o 00:05:00.115 CC lib/blobfs/tree.o 00:05:01.058 LIB libspdk_bdev.a 00:05:01.058 SO libspdk_bdev.so.17.0 00:05:01.058 LIB libspdk_blobfs.a 00:05:01.058 SO libspdk_blobfs.so.11.0 00:05:01.058 SYMLINK libspdk_bdev.so 00:05:01.058 LIB libspdk_lvol.a 00:05:01.058 SYMLINK libspdk_blobfs.so 00:05:01.058 SO libspdk_lvol.so.11.0 00:05:01.058 SYMLINK libspdk_lvol.so 00:05:01.058 CC lib/scsi/dev.o 00:05:01.058 CC lib/nvmf/ctrlr.o 00:05:01.058 CC lib/scsi/lun.o 00:05:01.058 CC lib/ublk/ublk.o 00:05:01.058 CC lib/nbd/nbd.o 00:05:01.058 CC lib/nvmf/ctrlr_discovery.o 00:05:01.058 CC lib/scsi/port.o 00:05:01.058 CC lib/ublk/ublk_rpc.o 00:05:01.058 CC lib/ftl/ftl_core.o 00:05:01.058 CC lib/scsi/scsi.o 00:05:01.058 CC lib/nbd/nbd_rpc.o 00:05:01.058 CC lib/nvmf/ctrlr_bdev.o 00:05:01.058 CC lib/ftl/ftl_init.o 00:05:01.058 CC lib/scsi/scsi_bdev.o 00:05:01.058 CC lib/ftl/ftl_layout.o 00:05:01.058 CC lib/scsi/scsi_pr.o 00:05:01.058 CC lib/nvmf/subsystem.o 00:05:01.058 CC lib/ftl/ftl_debug.o 00:05:01.058 CC lib/nvmf/nvmf.o 00:05:01.058 CC lib/scsi/scsi_rpc.o 00:05:01.058 CC lib/ftl/ftl_io.o 00:05:01.058 CC lib/scsi/task.o 00:05:01.058 CC lib/nvmf/nvmf_rpc.o 00:05:01.058 CC lib/ftl/ftl_l2p.o 00:05:01.058 CC lib/ftl/ftl_sb.o 00:05:01.058 CC lib/nvmf/transport.o 00:05:01.058 CC lib/nvmf/tcp.o 00:05:01.058 CC lib/nvmf/stubs.o 00:05:01.058 CC lib/ftl/ftl_l2p_flat.o 00:05:01.058 CC lib/nvmf/mdns_server.o 00:05:01.058 CC lib/ftl/ftl_nv_cache.o 00:05:01.058 CC lib/ftl/ftl_band.o 00:05:01.058 CC lib/nvmf/vfio_user.o 00:05:01.058 CC lib/nvmf/rdma.o 00:05:01.058 CC lib/nvmf/auth.o 00:05:01.058 CC lib/ftl/ftl_band_ops.o 00:05:01.058 CC lib/ftl/ftl_rq.o 00:05:01.058 CC lib/ftl/ftl_writer.o 00:05:01.058 CC lib/ftl/ftl_reloc.o 00:05:01.058 CC lib/ftl/ftl_l2p_cache.o 00:05:01.058 CC lib/ftl/ftl_p2l.o 00:05:01.058 CC lib/ftl/ftl_p2l_log.o 00:05:01.058 CC lib/ftl/mngt/ftl_mngt.o 00:05:01.058 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:01.058 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:01.058 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:01.058 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:01.058 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:01.626 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:01.626 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:01.626 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:01.626 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:01.626 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:01.626 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:01.626 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:01.626 CC lib/ftl/utils/ftl_conf.o 00:05:01.626 CC lib/ftl/utils/ftl_md.o 00:05:01.626 CC lib/ftl/utils/ftl_mempool.o 00:05:01.626 CC lib/ftl/utils/ftl_bitmap.o 00:05:01.626 CC lib/ftl/utils/ftl_property.o 00:05:01.626 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:01.626 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:01.626 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:01.626 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:01.626 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:01.626 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:01.626 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:01.887 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:01.887 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:01.887 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:01.887 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:01.887 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:01.887 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:01.887 CC lib/ftl/base/ftl_base_dev.o 00:05:01.887 CC lib/ftl/base/ftl_base_bdev.o 00:05:01.887 CC lib/ftl/ftl_trace.o 00:05:01.887 LIB libspdk_nbd.a 00:05:01.887 SO libspdk_nbd.so.7.0 00:05:02.146 LIB libspdk_scsi.a 00:05:02.146 SYMLINK libspdk_nbd.so 00:05:02.146 SO libspdk_scsi.so.9.0 00:05:02.146 SYMLINK libspdk_scsi.so 00:05:02.146 LIB libspdk_ublk.a 00:05:02.146 SO libspdk_ublk.so.3.0 00:05:02.146 SYMLINK libspdk_ublk.so 00:05:02.404 CC lib/iscsi/conn.o 00:05:02.404 CC lib/vhost/vhost.o 00:05:02.404 CC lib/iscsi/init_grp.o 00:05:02.404 CC lib/vhost/vhost_rpc.o 00:05:02.404 CC lib/vhost/vhost_scsi.o 00:05:02.404 CC lib/iscsi/iscsi.o 
00:05:02.404 CC lib/vhost/vhost_blk.o 00:05:02.404 CC lib/iscsi/param.o 00:05:02.404 CC lib/iscsi/portal_grp.o 00:05:02.404 CC lib/vhost/rte_vhost_user.o 00:05:02.404 CC lib/iscsi/tgt_node.o 00:05:02.404 CC lib/iscsi/iscsi_subsystem.o 00:05:02.404 CC lib/iscsi/iscsi_rpc.o 00:05:02.404 CC lib/iscsi/task.o 00:05:02.661 LIB libspdk_ftl.a 00:05:02.661 SO libspdk_ftl.so.9.0 00:05:02.919 SYMLINK libspdk_ftl.so 00:05:03.484 LIB libspdk_vhost.a 00:05:03.742 SO libspdk_vhost.so.8.0 00:05:03.742 SYMLINK libspdk_vhost.so 00:05:03.742 LIB libspdk_nvmf.a 00:05:03.742 SO libspdk_nvmf.so.20.0 00:05:03.742 LIB libspdk_iscsi.a 00:05:04.000 SO libspdk_iscsi.so.8.0 00:05:04.000 SYMLINK libspdk_nvmf.so 00:05:04.000 SYMLINK libspdk_iscsi.so 00:05:04.258 CC module/env_dpdk/env_dpdk_rpc.o 00:05:04.258 CC module/vfu_device/vfu_virtio.o 00:05:04.258 CC module/vfu_device/vfu_virtio_blk.o 00:05:04.258 CC module/vfu_device/vfu_virtio_scsi.o 00:05:04.258 CC module/vfu_device/vfu_virtio_rpc.o 00:05:04.258 CC module/vfu_device/vfu_virtio_fs.o 00:05:04.258 CC module/sock/posix/posix.o 00:05:04.258 CC module/keyring/linux/keyring.o 00:05:04.258 CC module/keyring/file/keyring.o 00:05:04.258 CC module/blob/bdev/blob_bdev.o 00:05:04.258 CC module/accel/ioat/accel_ioat.o 00:05:04.258 CC module/keyring/linux/keyring_rpc.o 00:05:04.258 CC module/accel/error/accel_error.o 00:05:04.258 CC module/keyring/file/keyring_rpc.o 00:05:04.258 CC module/accel/ioat/accel_ioat_rpc.o 00:05:04.258 CC module/accel/error/accel_error_rpc.o 00:05:04.258 CC module/fsdev/aio/fsdev_aio.o 00:05:04.258 CC module/accel/dsa/accel_dsa.o 00:05:04.258 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:04.258 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:04.258 CC module/fsdev/aio/linux_aio_mgr.o 00:05:04.258 CC module/scheduler/gscheduler/gscheduler.o 00:05:04.258 CC module/accel/dsa/accel_dsa_rpc.o 00:05:04.258 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:04.258 CC module/accel/iaa/accel_iaa.o 00:05:04.258 CC 
module/accel/iaa/accel_iaa_rpc.o 00:05:04.516 LIB libspdk_env_dpdk_rpc.a 00:05:04.516 SO libspdk_env_dpdk_rpc.so.6.0 00:05:04.516 SYMLINK libspdk_env_dpdk_rpc.so 00:05:04.516 LIB libspdk_keyring_file.a 00:05:04.516 LIB libspdk_scheduler_gscheduler.a 00:05:04.516 LIB libspdk_scheduler_dpdk_governor.a 00:05:04.516 SO libspdk_keyring_file.so.2.0 00:05:04.516 SO libspdk_scheduler_gscheduler.so.4.0 00:05:04.516 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:04.516 LIB libspdk_accel_ioat.a 00:05:04.516 LIB libspdk_scheduler_dynamic.a 00:05:04.516 LIB libspdk_accel_error.a 00:05:04.516 LIB libspdk_accel_iaa.a 00:05:04.516 SO libspdk_accel_ioat.so.6.0 00:05:04.773 SYMLINK libspdk_scheduler_gscheduler.so 00:05:04.773 LIB libspdk_keyring_linux.a 00:05:04.773 SYMLINK libspdk_keyring_file.so 00:05:04.773 SO libspdk_scheduler_dynamic.so.4.0 00:05:04.773 SO libspdk_accel_error.so.2.0 00:05:04.773 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:04.773 SO libspdk_accel_iaa.so.3.0 00:05:04.773 SO libspdk_keyring_linux.so.1.0 00:05:04.773 SYMLINK libspdk_accel_ioat.so 00:05:04.773 SYMLINK libspdk_scheduler_dynamic.so 00:05:04.773 SYMLINK libspdk_accel_error.so 00:05:04.773 LIB libspdk_blob_bdev.a 00:05:04.773 SYMLINK libspdk_accel_iaa.so 00:05:04.773 SYMLINK libspdk_keyring_linux.so 00:05:04.773 SO libspdk_blob_bdev.so.12.0 00:05:04.773 LIB libspdk_accel_dsa.a 00:05:04.773 SO libspdk_accel_dsa.so.5.0 00:05:04.773 SYMLINK libspdk_blob_bdev.so 00:05:04.773 SYMLINK libspdk_accel_dsa.so 00:05:05.033 LIB libspdk_vfu_device.a 00:05:05.033 SO libspdk_vfu_device.so.3.0 00:05:05.033 CC module/bdev/error/vbdev_error.o 00:05:05.033 CC module/bdev/delay/vbdev_delay.o 00:05:05.033 CC module/bdev/error/vbdev_error_rpc.o 00:05:05.033 CC module/bdev/passthru/vbdev_passthru.o 00:05:05.033 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:05.033 CC module/blobfs/bdev/blobfs_bdev.o 00:05:05.033 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:05.033 CC module/bdev/gpt/gpt.o 00:05:05.033 CC 
module/bdev/gpt/vbdev_gpt.o 00:05:05.033 CC module/bdev/malloc/bdev_malloc.o 00:05:05.033 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:05.033 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:05.033 CC module/bdev/lvol/vbdev_lvol.o 00:05:05.033 CC module/bdev/split/vbdev_split.o 00:05:05.033 CC module/bdev/ftl/bdev_ftl.o 00:05:05.033 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:05.033 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:05.033 CC module/bdev/null/bdev_null.o 00:05:05.033 CC module/bdev/split/vbdev_split_rpc.o 00:05:05.033 CC module/bdev/raid/bdev_raid.o 00:05:05.033 CC module/bdev/aio/bdev_aio.o 00:05:05.033 CC module/bdev/iscsi/bdev_iscsi.o 00:05:05.033 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:05.033 CC module/bdev/raid/bdev_raid_sb.o 00:05:05.033 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:05.033 CC module/bdev/aio/bdev_aio_rpc.o 00:05:05.033 CC module/bdev/raid/bdev_raid_rpc.o 00:05:05.033 CC module/bdev/null/bdev_null_rpc.o 00:05:05.033 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:05.033 CC module/bdev/raid/raid0.o 00:05:05.033 CC module/bdev/nvme/bdev_nvme.o 00:05:05.033 CC module/bdev/raid/raid1.o 00:05:05.033 CC module/bdev/raid/concat.o 00:05:05.033 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:05.033 CC module/bdev/nvme/nvme_rpc.o 00:05:05.033 CC module/bdev/nvme/bdev_mdns_client.o 00:05:05.033 CC module/bdev/nvme/vbdev_opal.o 00:05:05.033 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:05.033 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:05.033 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:05.033 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:05.033 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:05.033 LIB libspdk_fsdev_aio.a 00:05:05.033 SYMLINK libspdk_vfu_device.so 00:05:05.291 SO libspdk_fsdev_aio.so.1.0 00:05:05.291 SYMLINK libspdk_fsdev_aio.so 00:05:05.549 LIB libspdk_sock_posix.a 00:05:05.549 LIB libspdk_blobfs_bdev.a 00:05:05.549 SO libspdk_sock_posix.so.6.0 00:05:05.549 SO libspdk_blobfs_bdev.so.6.0 00:05:05.549 LIB 
libspdk_bdev_split.a 00:05:05.549 SO libspdk_bdev_split.so.6.0 00:05:05.549 LIB libspdk_bdev_null.a 00:05:05.549 SYMLINK libspdk_blobfs_bdev.so 00:05:05.549 LIB libspdk_bdev_gpt.a 00:05:05.549 SYMLINK libspdk_sock_posix.so 00:05:05.549 LIB libspdk_bdev_ftl.a 00:05:05.549 SO libspdk_bdev_null.so.6.0 00:05:05.549 LIB libspdk_bdev_passthru.a 00:05:05.549 LIB libspdk_bdev_error.a 00:05:05.549 SO libspdk_bdev_gpt.so.6.0 00:05:05.549 SYMLINK libspdk_bdev_split.so 00:05:05.549 SO libspdk_bdev_ftl.so.6.0 00:05:05.549 SO libspdk_bdev_passthru.so.6.0 00:05:05.549 SO libspdk_bdev_error.so.6.0 00:05:05.549 SYMLINK libspdk_bdev_null.so 00:05:05.549 LIB libspdk_bdev_malloc.a 00:05:05.549 LIB libspdk_bdev_delay.a 00:05:05.549 SYMLINK libspdk_bdev_gpt.so 00:05:05.549 SYMLINK libspdk_bdev_ftl.so 00:05:05.549 SO libspdk_bdev_malloc.so.6.0 00:05:05.549 SO libspdk_bdev_delay.so.6.0 00:05:05.549 SYMLINK libspdk_bdev_passthru.so 00:05:05.549 LIB libspdk_bdev_zone_block.a 00:05:05.549 SYMLINK libspdk_bdev_error.so 00:05:05.549 LIB libspdk_bdev_aio.a 00:05:05.805 SO libspdk_bdev_zone_block.so.6.0 00:05:05.805 SO libspdk_bdev_aio.so.6.0 00:05:05.805 LIB libspdk_bdev_iscsi.a 00:05:05.805 SYMLINK libspdk_bdev_delay.so 00:05:05.805 SYMLINK libspdk_bdev_malloc.so 00:05:05.805 SYMLINK libspdk_bdev_zone_block.so 00:05:05.805 SO libspdk_bdev_iscsi.so.6.0 00:05:05.805 SYMLINK libspdk_bdev_aio.so 00:05:05.805 SYMLINK libspdk_bdev_iscsi.so 00:05:05.805 LIB libspdk_bdev_lvol.a 00:05:05.805 SO libspdk_bdev_lvol.so.6.0 00:05:05.805 LIB libspdk_bdev_virtio.a 00:05:05.805 SO libspdk_bdev_virtio.so.6.0 00:05:05.805 SYMLINK libspdk_bdev_lvol.so 00:05:06.063 SYMLINK libspdk_bdev_virtio.so 00:05:06.321 LIB libspdk_bdev_raid.a 00:05:06.321 SO libspdk_bdev_raid.so.6.0 00:05:06.578 SYMLINK libspdk_bdev_raid.so 00:05:07.949 LIB libspdk_bdev_nvme.a 00:05:07.949 SO libspdk_bdev_nvme.so.7.1 00:05:07.949 SYMLINK libspdk_bdev_nvme.so 00:05:08.207 CC module/event/subsystems/iobuf/iobuf.o 00:05:08.207 CC 
module/event/subsystems/vmd/vmd.o 00:05:08.207 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:08.207 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:08.207 CC module/event/subsystems/scheduler/scheduler.o 00:05:08.207 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:08.207 CC module/event/subsystems/fsdev/fsdev.o 00:05:08.207 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:08.207 CC module/event/subsystems/keyring/keyring.o 00:05:08.207 CC module/event/subsystems/sock/sock.o 00:05:08.466 LIB libspdk_event_keyring.a 00:05:08.466 LIB libspdk_event_vhost_blk.a 00:05:08.466 LIB libspdk_event_scheduler.a 00:05:08.466 LIB libspdk_event_fsdev.a 00:05:08.466 LIB libspdk_event_vfu_tgt.a 00:05:08.466 LIB libspdk_event_vmd.a 00:05:08.466 LIB libspdk_event_sock.a 00:05:08.466 SO libspdk_event_vhost_blk.so.3.0 00:05:08.466 SO libspdk_event_keyring.so.1.0 00:05:08.466 LIB libspdk_event_iobuf.a 00:05:08.466 SO libspdk_event_fsdev.so.1.0 00:05:08.466 SO libspdk_event_scheduler.so.4.0 00:05:08.466 SO libspdk_event_vfu_tgt.so.3.0 00:05:08.466 SO libspdk_event_sock.so.5.0 00:05:08.466 SO libspdk_event_vmd.so.6.0 00:05:08.466 SO libspdk_event_iobuf.so.3.0 00:05:08.466 SYMLINK libspdk_event_keyring.so 00:05:08.466 SYMLINK libspdk_event_vhost_blk.so 00:05:08.466 SYMLINK libspdk_event_fsdev.so 00:05:08.466 SYMLINK libspdk_event_vfu_tgt.so 00:05:08.466 SYMLINK libspdk_event_scheduler.so 00:05:08.466 SYMLINK libspdk_event_sock.so 00:05:08.466 SYMLINK libspdk_event_vmd.so 00:05:08.466 SYMLINK libspdk_event_iobuf.so 00:05:08.723 CC module/event/subsystems/accel/accel.o 00:05:08.723 LIB libspdk_event_accel.a 00:05:08.981 SO libspdk_event_accel.so.6.0 00:05:08.981 SYMLINK libspdk_event_accel.so 00:05:09.239 CC module/event/subsystems/bdev/bdev.o 00:05:09.239 LIB libspdk_event_bdev.a 00:05:09.239 SO libspdk_event_bdev.so.6.0 00:05:09.239 SYMLINK libspdk_event_bdev.so 00:05:09.497 CC module/event/subsystems/ublk/ublk.o 00:05:09.497 CC module/event/subsystems/nvmf/nvmf_tgt.o 
00:05:09.497 CC module/event/subsystems/nbd/nbd.o 00:05:09.497 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:09.497 CC module/event/subsystems/scsi/scsi.o 00:05:09.755 LIB libspdk_event_ublk.a 00:05:09.755 LIB libspdk_event_nbd.a 00:05:09.755 LIB libspdk_event_scsi.a 00:05:09.755 SO libspdk_event_nbd.so.6.0 00:05:09.755 SO libspdk_event_ublk.so.3.0 00:05:09.755 SO libspdk_event_scsi.so.6.0 00:05:09.755 SYMLINK libspdk_event_nbd.so 00:05:09.755 SYMLINK libspdk_event_ublk.so 00:05:09.755 SYMLINK libspdk_event_scsi.so 00:05:09.755 LIB libspdk_event_nvmf.a 00:05:09.755 SO libspdk_event_nvmf.so.6.0 00:05:09.755 SYMLINK libspdk_event_nvmf.so 00:05:10.013 CC module/event/subsystems/iscsi/iscsi.o 00:05:10.013 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:10.013 LIB libspdk_event_vhost_scsi.a 00:05:10.013 LIB libspdk_event_iscsi.a 00:05:10.013 SO libspdk_event_vhost_scsi.so.3.0 00:05:10.013 SO libspdk_event_iscsi.so.6.0 00:05:10.271 SYMLINK libspdk_event_vhost_scsi.so 00:05:10.272 SYMLINK libspdk_event_iscsi.so 00:05:10.272 SO libspdk.so.6.0 00:05:10.272 SYMLINK libspdk.so 00:05:10.533 CC test/rpc_client/rpc_client_test.o 00:05:10.533 CXX app/trace/trace.o 00:05:10.533 TEST_HEADER include/spdk/accel.h 00:05:10.533 CC app/trace_record/trace_record.o 00:05:10.533 TEST_HEADER include/spdk/accel_module.h 00:05:10.533 TEST_HEADER include/spdk/assert.h 00:05:10.533 TEST_HEADER include/spdk/barrier.h 00:05:10.533 TEST_HEADER include/spdk/base64.h 00:05:10.533 TEST_HEADER include/spdk/bdev.h 00:05:10.533 CC app/spdk_nvme_identify/identify.o 00:05:10.533 TEST_HEADER include/spdk/bdev_module.h 00:05:10.533 TEST_HEADER include/spdk/bdev_zone.h 00:05:10.533 CC app/spdk_lspci/spdk_lspci.o 00:05:10.533 CC app/spdk_nvme_discover/discovery_aer.o 00:05:10.533 TEST_HEADER include/spdk/bit_array.h 00:05:10.533 TEST_HEADER include/spdk/bit_pool.h 00:05:10.533 TEST_HEADER include/spdk/blob_bdev.h 00:05:10.533 CC app/spdk_nvme_perf/perf.o 00:05:10.533 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:05:10.533 TEST_HEADER include/spdk/blobfs.h 00:05:10.533 CC app/spdk_top/spdk_top.o 00:05:10.533 TEST_HEADER include/spdk/blob.h 00:05:10.533 TEST_HEADER include/spdk/conf.h 00:05:10.533 TEST_HEADER include/spdk/config.h 00:05:10.533 TEST_HEADER include/spdk/cpuset.h 00:05:10.533 TEST_HEADER include/spdk/crc16.h 00:05:10.533 TEST_HEADER include/spdk/crc32.h 00:05:10.533 TEST_HEADER include/spdk/crc64.h 00:05:10.533 TEST_HEADER include/spdk/dif.h 00:05:10.533 TEST_HEADER include/spdk/dma.h 00:05:10.533 TEST_HEADER include/spdk/endian.h 00:05:10.533 TEST_HEADER include/spdk/env_dpdk.h 00:05:10.533 TEST_HEADER include/spdk/env.h 00:05:10.533 TEST_HEADER include/spdk/event.h 00:05:10.533 TEST_HEADER include/spdk/fd_group.h 00:05:10.533 TEST_HEADER include/spdk/file.h 00:05:10.533 TEST_HEADER include/spdk/fd.h 00:05:10.533 TEST_HEADER include/spdk/fsdev.h 00:05:10.533 TEST_HEADER include/spdk/fsdev_module.h 00:05:10.533 TEST_HEADER include/spdk/ftl.h 00:05:10.533 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:10.533 TEST_HEADER include/spdk/gpt_spec.h 00:05:10.533 TEST_HEADER include/spdk/hexlify.h 00:05:10.533 TEST_HEADER include/spdk/histogram_data.h 00:05:10.533 TEST_HEADER include/spdk/idxd.h 00:05:10.533 TEST_HEADER include/spdk/idxd_spec.h 00:05:10.533 TEST_HEADER include/spdk/init.h 00:05:10.533 TEST_HEADER include/spdk/ioat.h 00:05:10.533 TEST_HEADER include/spdk/ioat_spec.h 00:05:10.533 TEST_HEADER include/spdk/iscsi_spec.h 00:05:10.533 TEST_HEADER include/spdk/json.h 00:05:10.533 TEST_HEADER include/spdk/jsonrpc.h 00:05:10.533 TEST_HEADER include/spdk/keyring.h 00:05:10.533 TEST_HEADER include/spdk/keyring_module.h 00:05:10.533 TEST_HEADER include/spdk/likely.h 00:05:10.533 TEST_HEADER include/spdk/log.h 00:05:10.533 TEST_HEADER include/spdk/lvol.h 00:05:10.533 TEST_HEADER include/spdk/md5.h 00:05:10.533 TEST_HEADER include/spdk/memory.h 00:05:10.533 TEST_HEADER include/spdk/mmio.h 00:05:10.533 TEST_HEADER include/spdk/nbd.h 
00:05:10.533 TEST_HEADER include/spdk/net.h 00:05:10.533 TEST_HEADER include/spdk/notify.h 00:05:10.533 TEST_HEADER include/spdk/nvme.h 00:05:10.533 TEST_HEADER include/spdk/nvme_intel.h 00:05:10.533 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:10.533 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:10.533 TEST_HEADER include/spdk/nvme_spec.h 00:05:10.533 TEST_HEADER include/spdk/nvme_zns.h 00:05:10.533 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:10.533 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:10.533 TEST_HEADER include/spdk/nvmf.h 00:05:10.533 TEST_HEADER include/spdk/nvmf_spec.h 00:05:10.533 TEST_HEADER include/spdk/opal.h 00:05:10.533 TEST_HEADER include/spdk/nvmf_transport.h 00:05:10.533 TEST_HEADER include/spdk/pci_ids.h 00:05:10.533 TEST_HEADER include/spdk/opal_spec.h 00:05:10.533 TEST_HEADER include/spdk/pipe.h 00:05:10.533 TEST_HEADER include/spdk/queue.h 00:05:10.533 TEST_HEADER include/spdk/rpc.h 00:05:10.533 TEST_HEADER include/spdk/reduce.h 00:05:10.533 TEST_HEADER include/spdk/scsi.h 00:05:10.533 TEST_HEADER include/spdk/scheduler.h 00:05:10.533 TEST_HEADER include/spdk/scsi_spec.h 00:05:10.533 TEST_HEADER include/spdk/stdinc.h 00:05:10.533 TEST_HEADER include/spdk/sock.h 00:05:10.533 TEST_HEADER include/spdk/string.h 00:05:10.533 TEST_HEADER include/spdk/thread.h 00:05:10.533 TEST_HEADER include/spdk/trace_parser.h 00:05:10.533 TEST_HEADER include/spdk/trace.h 00:05:10.533 TEST_HEADER include/spdk/tree.h 00:05:10.533 TEST_HEADER include/spdk/ublk.h 00:05:10.533 TEST_HEADER include/spdk/util.h 00:05:10.533 TEST_HEADER include/spdk/uuid.h 00:05:10.533 TEST_HEADER include/spdk/version.h 00:05:10.533 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:10.534 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:10.534 TEST_HEADER include/spdk/vhost.h 00:05:10.534 TEST_HEADER include/spdk/vmd.h 00:05:10.534 TEST_HEADER include/spdk/xor.h 00:05:10.534 TEST_HEADER include/spdk/zipf.h 00:05:10.534 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:10.534 CXX 
test/cpp_headers/accel.o 00:05:10.534 CXX test/cpp_headers/accel_module.o 00:05:10.534 CXX test/cpp_headers/assert.o 00:05:10.534 CXX test/cpp_headers/barrier.o 00:05:10.534 CXX test/cpp_headers/base64.o 00:05:10.534 CXX test/cpp_headers/bdev.o 00:05:10.534 CXX test/cpp_headers/bdev_module.o 00:05:10.534 CXX test/cpp_headers/bdev_zone.o 00:05:10.534 CXX test/cpp_headers/bit_array.o 00:05:10.534 CXX test/cpp_headers/bit_pool.o 00:05:10.534 CXX test/cpp_headers/blob_bdev.o 00:05:10.534 CXX test/cpp_headers/blobfs_bdev.o 00:05:10.534 CXX test/cpp_headers/blobfs.o 00:05:10.534 CXX test/cpp_headers/blob.o 00:05:10.534 CXX test/cpp_headers/conf.o 00:05:10.534 CXX test/cpp_headers/config.o 00:05:10.534 CXX test/cpp_headers/cpuset.o 00:05:10.534 CXX test/cpp_headers/crc16.o 00:05:10.534 CC app/spdk_dd/spdk_dd.o 00:05:10.534 CC app/iscsi_tgt/iscsi_tgt.o 00:05:10.534 CC app/nvmf_tgt/nvmf_main.o 00:05:10.534 CXX test/cpp_headers/crc32.o 00:05:10.534 CC test/app/histogram_perf/histogram_perf.o 00:05:10.534 CC examples/util/zipf/zipf.o 00:05:10.534 CC examples/ioat/verify/verify.o 00:05:10.534 CC test/app/jsoncat/jsoncat.o 00:05:10.534 CC examples/ioat/perf/perf.o 00:05:10.534 CC test/thread/poller_perf/poller_perf.o 00:05:10.534 CC test/app/stub/stub.o 00:05:10.534 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:10.534 CC app/fio/nvme/fio_plugin.o 00:05:10.534 CC app/spdk_tgt/spdk_tgt.o 00:05:10.534 CC test/env/pci/pci_ut.o 00:05:10.534 CC test/env/memory/memory_ut.o 00:05:10.534 CC test/env/vtophys/vtophys.o 00:05:10.794 CC test/dma/test_dma/test_dma.o 00:05:10.794 CC test/app/bdev_svc/bdev_svc.o 00:05:10.794 CC app/fio/bdev/fio_plugin.o 00:05:10.794 CC test/env/mem_callbacks/mem_callbacks.o 00:05:10.794 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:10.794 LINK spdk_lspci 00:05:10.794 LINK rpc_client_test 00:05:11.054 LINK spdk_nvme_discover 00:05:11.054 LINK jsoncat 00:05:11.054 LINK histogram_perf 00:05:11.054 LINK interrupt_tgt 00:05:11.054 LINK poller_perf 
00:05:11.054 CXX test/cpp_headers/crc64.o 00:05:11.054 LINK nvmf_tgt 00:05:11.054 CXX test/cpp_headers/dif.o 00:05:11.054 LINK zipf 00:05:11.054 CXX test/cpp_headers/dma.o 00:05:11.054 LINK spdk_trace_record 00:05:11.054 LINK env_dpdk_post_init 00:05:11.054 LINK vtophys 00:05:11.054 CXX test/cpp_headers/endian.o 00:05:11.054 CXX test/cpp_headers/env_dpdk.o 00:05:11.054 CXX test/cpp_headers/env.o 00:05:11.054 CXX test/cpp_headers/event.o 00:05:11.054 CXX test/cpp_headers/fd_group.o 00:05:11.054 CXX test/cpp_headers/fd.o 00:05:11.054 CXX test/cpp_headers/file.o 00:05:11.054 LINK iscsi_tgt 00:05:11.054 CXX test/cpp_headers/fsdev.o 00:05:11.054 LINK stub 00:05:11.054 CXX test/cpp_headers/fsdev_module.o 00:05:11.054 CXX test/cpp_headers/ftl.o 00:05:11.054 CXX test/cpp_headers/fuse_dispatcher.o 00:05:11.054 CXX test/cpp_headers/gpt_spec.o 00:05:11.054 CXX test/cpp_headers/hexlify.o 00:05:11.054 CXX test/cpp_headers/histogram_data.o 00:05:11.054 LINK ioat_perf 00:05:11.054 LINK spdk_tgt 00:05:11.054 LINK bdev_svc 00:05:11.054 LINK verify 00:05:11.054 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:11.054 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:11.054 CXX test/cpp_headers/idxd.o 00:05:11.317 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:11.317 CXX test/cpp_headers/idxd_spec.o 00:05:11.317 CXX test/cpp_headers/init.o 00:05:11.317 CXX test/cpp_headers/ioat.o 00:05:11.317 CXX test/cpp_headers/ioat_spec.o 00:05:11.317 CXX test/cpp_headers/iscsi_spec.o 00:05:11.317 CXX test/cpp_headers/json.o 00:05:11.317 CXX test/cpp_headers/jsonrpc.o 00:05:11.317 LINK spdk_dd 00:05:11.317 CXX test/cpp_headers/keyring.o 00:05:11.317 CXX test/cpp_headers/keyring_module.o 00:05:11.317 LINK spdk_trace 00:05:11.317 CXX test/cpp_headers/likely.o 00:05:11.317 CXX test/cpp_headers/log.o 00:05:11.317 CXX test/cpp_headers/lvol.o 00:05:11.317 CXX test/cpp_headers/md5.o 00:05:11.317 CXX test/cpp_headers/memory.o 00:05:11.317 CXX test/cpp_headers/mmio.o 00:05:11.317 LINK pci_ut 00:05:11.317 CXX 
test/cpp_headers/nbd.o 00:05:11.317 CXX test/cpp_headers/net.o 00:05:11.317 CXX test/cpp_headers/notify.o 00:05:11.582 CXX test/cpp_headers/nvme.o 00:05:11.582 CXX test/cpp_headers/nvme_intel.o 00:05:11.582 CXX test/cpp_headers/nvme_ocssd.o 00:05:11.582 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:11.582 CXX test/cpp_headers/nvme_spec.o 00:05:11.582 CXX test/cpp_headers/nvme_zns.o 00:05:11.582 CXX test/cpp_headers/nvmf_cmd.o 00:05:11.582 LINK nvme_fuzz 00:05:11.582 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:11.582 CC test/event/event_perf/event_perf.o 00:05:11.582 CXX test/cpp_headers/nvmf.o 00:05:11.582 CC test/event/reactor/reactor.o 00:05:11.582 CXX test/cpp_headers/nvmf_spec.o 00:05:11.582 CC test/event/reactor_perf/reactor_perf.o 00:05:11.582 CC test/event/app_repeat/app_repeat.o 00:05:11.582 CXX test/cpp_headers/nvmf_transport.o 00:05:11.582 CXX test/cpp_headers/opal.o 00:05:11.582 LINK test_dma 00:05:11.582 CC examples/sock/hello_world/hello_sock.o 00:05:11.848 CXX test/cpp_headers/opal_spec.o 00:05:11.848 CC examples/idxd/perf/perf.o 00:05:11.848 CC examples/vmd/lsvmd/lsvmd.o 00:05:11.848 CXX test/cpp_headers/pci_ids.o 00:05:11.848 CXX test/cpp_headers/pipe.o 00:05:11.848 CC test/event/scheduler/scheduler.o 00:05:11.848 LINK spdk_nvme 00:05:11.848 LINK spdk_bdev 00:05:11.848 CXX test/cpp_headers/queue.o 00:05:11.848 CC examples/thread/thread/thread_ex.o 00:05:11.848 CXX test/cpp_headers/reduce.o 00:05:11.848 CXX test/cpp_headers/rpc.o 00:05:11.848 CXX test/cpp_headers/scheduler.o 00:05:11.848 CC examples/vmd/led/led.o 00:05:11.848 CXX test/cpp_headers/scsi.o 00:05:11.848 CXX test/cpp_headers/scsi_spec.o 00:05:11.848 CXX test/cpp_headers/sock.o 00:05:11.848 CXX test/cpp_headers/stdinc.o 00:05:11.848 CXX test/cpp_headers/string.o 00:05:11.848 CXX test/cpp_headers/thread.o 00:05:11.848 CXX test/cpp_headers/trace.o 00:05:11.848 CXX test/cpp_headers/trace_parser.o 00:05:11.848 CXX test/cpp_headers/tree.o 00:05:11.848 CXX test/cpp_headers/ublk.o 00:05:11.848 CXX 
test/cpp_headers/util.o 00:05:11.848 CXX test/cpp_headers/uuid.o 00:05:11.848 LINK reactor 00:05:11.848 CXX test/cpp_headers/version.o 00:05:11.848 LINK event_perf 00:05:11.848 CXX test/cpp_headers/vfio_user_pci.o 00:05:11.848 LINK reactor_perf 00:05:12.106 CXX test/cpp_headers/vfio_user_spec.o 00:05:12.106 CXX test/cpp_headers/vhost.o 00:05:12.106 CXX test/cpp_headers/vmd.o 00:05:12.106 CXX test/cpp_headers/xor.o 00:05:12.106 LINK mem_callbacks 00:05:12.106 LINK lsvmd 00:05:12.106 CXX test/cpp_headers/zipf.o 00:05:12.106 LINK app_repeat 00:05:12.106 LINK spdk_nvme_perf 00:05:12.106 LINK vhost_fuzz 00:05:12.106 CC app/vhost/vhost.o 00:05:12.106 LINK led 00:05:12.106 LINK spdk_nvme_identify 00:05:12.106 LINK scheduler 00:05:12.106 LINK spdk_top 00:05:12.106 LINK hello_sock 00:05:12.365 LINK thread 00:05:12.365 CC test/nvme/compliance/nvme_compliance.o 00:05:12.365 CC test/nvme/err_injection/err_injection.o 00:05:12.365 CC test/nvme/reset/reset.o 00:05:12.365 CC test/nvme/sgl/sgl.o 00:05:12.365 CC test/nvme/simple_copy/simple_copy.o 00:05:12.365 CC test/nvme/aer/aer.o 00:05:12.365 CC test/nvme/reserve/reserve.o 00:05:12.365 CC test/nvme/boot_partition/boot_partition.o 00:05:12.365 CC test/nvme/overhead/overhead.o 00:05:12.365 CC test/nvme/e2edp/nvme_dp.o 00:05:12.365 CC test/nvme/connect_stress/connect_stress.o 00:05:12.365 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:12.365 CC test/nvme/fdp/fdp.o 00:05:12.365 CC test/nvme/startup/startup.o 00:05:12.365 CC test/nvme/fused_ordering/fused_ordering.o 00:05:12.365 CC test/nvme/cuse/cuse.o 00:05:12.365 LINK idxd_perf 00:05:12.365 CC test/accel/dif/dif.o 00:05:12.365 CC test/blobfs/mkfs/mkfs.o 00:05:12.365 LINK vhost 00:05:12.365 CC test/lvol/esnap/esnap.o 00:05:12.623 LINK boot_partition 00:05:12.623 LINK connect_stress 00:05:12.623 LINK doorbell_aers 00:05:12.623 LINK startup 00:05:12.623 LINK simple_copy 00:05:12.623 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:12.623 CC examples/nvme/arbitration/arbitration.o 
00:05:12.623 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:12.623 CC examples/nvme/hello_world/hello_world.o 00:05:12.623 CC examples/nvme/abort/abort.o 00:05:12.623 CC examples/nvme/hotplug/hotplug.o 00:05:12.623 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:12.623 CC examples/nvme/reconnect/reconnect.o 00:05:12.623 LINK mkfs 00:05:12.623 LINK reserve 00:05:12.623 LINK err_injection 00:05:12.623 LINK sgl 00:05:12.623 CC examples/accel/perf/accel_perf.o 00:05:12.623 LINK fused_ordering 00:05:12.901 LINK overhead 00:05:12.901 LINK nvme_compliance 00:05:12.901 CC examples/blob/cli/blobcli.o 00:05:12.901 LINK fdp 00:05:12.901 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:12.901 CC examples/blob/hello_world/hello_blob.o 00:05:12.901 LINK reset 00:05:12.901 LINK memory_ut 00:05:12.901 LINK nvme_dp 00:05:12.901 LINK aer 00:05:12.901 LINK cmb_copy 00:05:12.901 LINK pmr_persistence 00:05:13.159 LINK hotplug 00:05:13.159 LINK hello_world 00:05:13.159 LINK hello_blob 00:05:13.159 LINK arbitration 00:05:13.159 LINK abort 00:05:13.159 LINK reconnect 00:05:13.159 LINK hello_fsdev 00:05:13.417 LINK nvme_manage 00:05:13.417 LINK accel_perf 00:05:13.417 LINK dif 00:05:13.417 LINK blobcli 00:05:13.675 LINK iscsi_fuzz 00:05:13.675 CC examples/bdev/hello_world/hello_bdev.o 00:05:13.675 CC examples/bdev/bdevperf/bdevperf.o 00:05:13.675 CC test/bdev/bdevio/bdevio.o 00:05:13.933 LINK hello_bdev 00:05:13.933 LINK cuse 00:05:14.191 LINK bdevio 00:05:14.449 LINK bdevperf 00:05:15.014 CC examples/nvmf/nvmf/nvmf.o 00:05:15.272 LINK nvmf 00:05:17.870 LINK esnap 00:05:18.128 00:05:18.128 real 1m10.059s 00:05:18.128 user 11m50.833s 00:05:18.128 sys 2m36.837s 00:05:18.128 19:03:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:18.128 19:03:28 make -- common/autotest_common.sh@10 -- $ set +x 00:05:18.128 ************************************ 00:05:18.128 END TEST make 00:05:18.128 ************************************ 00:05:18.128 19:03:28 -- spdk/autobuild.sh@1 
-- $ stop_monitor_resources 00:05:18.128 19:03:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:18.128 19:03:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:18.128 19:03:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.128 19:03:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:18.128 19:03:28 -- pm/common@44 -- $ pid=918491 00:05:18.128 19:03:28 -- pm/common@50 -- $ kill -TERM 918491 00:05:18.128 19:03:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.128 19:03:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:18.128 19:03:28 -- pm/common@44 -- $ pid=918493 00:05:18.128 19:03:28 -- pm/common@50 -- $ kill -TERM 918493 00:05:18.128 19:03:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.128 19:03:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:18.128 19:03:28 -- pm/common@44 -- $ pid=918495 00:05:18.128 19:03:28 -- pm/common@50 -- $ kill -TERM 918495 00:05:18.128 19:03:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.128 19:03:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:18.128 19:03:28 -- pm/common@44 -- $ pid=918527 00:05:18.128 19:03:28 -- pm/common@50 -- $ sudo -E kill -TERM 918527 00:05:18.128 19:03:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:18.128 19:03:28 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:18.128 19:03:28 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.128 19:03:28 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.128 
19:03:28 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.128 19:03:28 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.128 19:03:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.128 19:03:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.128 19:03:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.128 19:03:28 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.128 19:03:28 -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.128 19:03:28 -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.128 19:03:28 -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.128 19:03:28 -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.128 19:03:28 -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.128 19:03:28 -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.128 19:03:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.128 19:03:28 -- scripts/common.sh@344 -- # case "$op" in 00:05:18.128 19:03:28 -- scripts/common.sh@345 -- # : 1 00:05:18.128 19:03:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.128 19:03:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.128 19:03:28 -- scripts/common.sh@365 -- # decimal 1 00:05:18.128 19:03:28 -- scripts/common.sh@353 -- # local d=1 00:05:18.128 19:03:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.128 19:03:28 -- scripts/common.sh@355 -- # echo 1 00:05:18.128 19:03:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.128 19:03:28 -- scripts/common.sh@366 -- # decimal 2 00:05:18.128 19:03:28 -- scripts/common.sh@353 -- # local d=2 00:05:18.128 19:03:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.128 19:03:28 -- scripts/common.sh@355 -- # echo 2 00:05:18.128 19:03:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.128 19:03:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.128 19:03:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.128 19:03:28 -- scripts/common.sh@368 -- # return 0 00:05:18.128 19:03:28 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.128 19:03:28 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.128 --rc genhtml_branch_coverage=1 00:05:18.128 --rc genhtml_function_coverage=1 00:05:18.128 --rc genhtml_legend=1 00:05:18.128 --rc geninfo_all_blocks=1 00:05:18.128 --rc geninfo_unexecuted_blocks=1 00:05:18.128 00:05:18.128 ' 00:05:18.128 19:03:28 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.128 --rc genhtml_branch_coverage=1 00:05:18.128 --rc genhtml_function_coverage=1 00:05:18.128 --rc genhtml_legend=1 00:05:18.128 --rc geninfo_all_blocks=1 00:05:18.128 --rc geninfo_unexecuted_blocks=1 00:05:18.128 00:05:18.128 ' 00:05:18.128 19:03:28 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.128 --rc genhtml_branch_coverage=1 00:05:18.128 --rc 
genhtml_function_coverage=1 00:05:18.128 --rc genhtml_legend=1 00:05:18.128 --rc geninfo_all_blocks=1 00:05:18.128 --rc geninfo_unexecuted_blocks=1 00:05:18.128 00:05:18.128 ' 00:05:18.128 19:03:28 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.128 --rc genhtml_branch_coverage=1 00:05:18.128 --rc genhtml_function_coverage=1 00:05:18.128 --rc genhtml_legend=1 00:05:18.128 --rc geninfo_all_blocks=1 00:05:18.128 --rc geninfo_unexecuted_blocks=1 00:05:18.128 00:05:18.128 ' 00:05:18.128 19:03:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.128 19:03:28 -- nvmf/common.sh@7 -- # uname -s 00:05:18.388 19:03:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.388 19:03:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.388 19:03:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.388 19:03:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.388 19:03:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.388 19:03:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.388 19:03:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.388 19:03:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.388 19:03:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.388 19:03:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.388 19:03:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:18.388 19:03:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:18.388 19:03:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.388 19:03:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.388 19:03:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:18.388 19:03:28 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.388 19:03:28 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.388 19:03:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.388 19:03:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.388 19:03:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.388 19:03:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.388 19:03:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.388 19:03:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.388 19:03:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.388 19:03:28 -- paths/export.sh@5 -- # export PATH 00:05:18.388 19:03:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.388 19:03:28 -- nvmf/common.sh@51 -- # : 0 00:05:18.388 19:03:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.388 19:03:28 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:18.388 19:03:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.388 19:03:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.388 19:03:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.388 19:03:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.388 19:03:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.388 19:03:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.388 19:03:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.388 19:03:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:18.388 19:03:28 -- spdk/autotest.sh@32 -- # uname -s 00:05:18.388 19:03:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:18.388 19:03:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:18.388 19:03:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:18.388 19:03:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:18.388 19:03:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:18.388 19:03:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:18.388 19:03:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:18.388 19:03:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:18.388 19:03:28 -- spdk/autotest.sh@48 -- # udevadm_pid=978571 00:05:18.388 19:03:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:18.388 19:03:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:18.388 19:03:28 -- pm/common@17 -- # local monitor 00:05:18.388 19:03:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.388 19:03:28 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:18.388 19:03:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.388 19:03:28 -- pm/common@21 -- # date +%s 00:05:18.388 19:03:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.388 19:03:28 -- pm/common@21 -- # date +%s 00:05:18.388 19:03:28 -- pm/common@25 -- # sleep 1 00:05:18.388 19:03:28 -- pm/common@21 -- # date +%s 00:05:18.388 19:03:28 -- pm/common@21 -- # date +%s 00:05:18.388 19:03:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508208 00:05:18.388 19:03:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508208 00:05:18.388 19:03:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508208 00:05:18.388 19:03:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508208 00:05:18.388 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508208_collect-cpu-load.pm.log 00:05:18.388 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508208_collect-cpu-temp.pm.log 00:05:18.388 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508208_collect-vmstat.pm.log 00:05:18.388 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508208_collect-bmc-pm.bmc.pm.log 00:05:19.325 
19:03:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:19.325 19:03:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:19.325 19:03:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.325 19:03:29 -- common/autotest_common.sh@10 -- # set +x 00:05:19.325 19:03:29 -- spdk/autotest.sh@59 -- # create_test_list 00:05:19.325 19:03:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:19.325 19:03:29 -- common/autotest_common.sh@10 -- # set +x 00:05:19.325 19:03:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:19.325 19:03:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.325 19:03:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.325 19:03:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:19.325 19:03:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.325 19:03:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:19.325 19:03:29 -- common/autotest_common.sh@1457 -- # uname 00:05:19.325 19:03:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:19.325 19:03:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:19.326 19:03:29 -- common/autotest_common.sh@1477 -- # uname 00:05:19.326 19:03:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:19.326 19:03:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:19.326 19:03:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:19.326 lcov: LCOV version 1.15 00:05:19.326 19:03:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:37.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:37.408 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:59.323 19:04:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:59.323 19:04:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.323 19:04:07 -- common/autotest_common.sh@10 -- # set +x 00:05:59.323 19:04:07 -- spdk/autotest.sh@78 -- # rm -f 00:05:59.323 19:04:07 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:59.323 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:59.323 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:59.323 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:59.323 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:59.323 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:59.323 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:59.323 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:59.323 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:59.323 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:59.323 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:59.323 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:59.323 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:59.323 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:59.323 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:59.323 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:59.323 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:59.323 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:59.323 19:04:08 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:59.323 19:04:08 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:59.323 19:04:08 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:59.323 19:04:08 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:59.323 19:04:08 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:59.323 19:04:08 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:59.323 19:04:08 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:59.323 19:04:08 -- common/autotest_common.sh@1669 -- # bdf=0000:88:00.0 00:05:59.323 19:04:08 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:59.323 19:04:08 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:59.323 19:04:08 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:59.323 19:04:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:59.323 19:04:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:59.323 19:04:08 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:59.323 19:04:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:59.323 19:04:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:59.323 19:04:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:59.323 19:04:08 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:59.323 19:04:08 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:59.323 No valid GPT data, bailing 00:05:59.323 19:04:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:59.323 19:04:08 -- scripts/common.sh@394 -- # pt= 00:05:59.323 19:04:08 -- scripts/common.sh@395 -- 
# return 1 00:05:59.323 19:04:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:59.323 1+0 records in 00:05:59.323 1+0 records out 00:05:59.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00216989 s, 483 MB/s 00:05:59.323 19:04:08 -- spdk/autotest.sh@105 -- # sync 00:05:59.323 19:04:08 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:59.323 19:04:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:59.323 19:04:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:00.695 19:04:10 -- spdk/autotest.sh@111 -- # uname -s 00:06:00.695 19:04:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:00.695 19:04:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:00.695 19:04:10 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:01.626 Hugepages 00:06:01.626 node hugesize free / total 00:06:01.626 node0 1048576kB 0 / 0 00:06:01.626 node0 2048kB 0 / 0 00:06:01.626 node1 1048576kB 0 / 0 00:06:01.626 node1 2048kB 0 / 0 00:06:01.626 00:06:01.626 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:01.626 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:01.883 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:01.883 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:01.883 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:01.883 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:01.883 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:01.883 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:01.883 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:01.883 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:01.883 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:01.883 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:01.883 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:01.883 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:01.883 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:01.883 I/OAT 0000:80:04.6 8086 0e26 1 
ioatdma - - 00:06:01.883 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:01.883 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:01.883 19:04:12 -- spdk/autotest.sh@117 -- # uname -s 00:06:01.883 19:04:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:01.883 19:04:12 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:01.883 19:04:12 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:03.257 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:03.257 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:03.257 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:03.257 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:03.257 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:03.257 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:03.257 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:03.257 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:03.257 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:03.257 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:03.257 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:03.257 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:03.257 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:03.257 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:03.257 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:03.257 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:04.193 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:06:04.451 19:04:14 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:05.387 19:04:15 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:05.387 19:04:15 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:05.387 19:04:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:05.387 19:04:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:05.387 19:04:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:05.387 19:04:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:05.387 19:04:15 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:05.387 19:04:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:05.387 19:04:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:05.387 19:04:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:05.387 19:04:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:06:05.387 19:04:15 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:06.763 Waiting for block devices as requested 00:06:06.763 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:06:06.763 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:06.763 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:07.022 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:07.022 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:07.022 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:07.022 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:07.281 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:07.281 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:07.281 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:07.281 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:07.540 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:07.540 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:07.540 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:07.799 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:07.799 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:07.799 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:08.061 19:04:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:08.061 19:04:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:06:08.061 19:04:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:08.061 19:04:18 -- 
common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:06:08.061 19:04:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:06:08.061 19:04:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:06:08.061 19:04:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:06:08.061 19:04:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:08.061 19:04:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:08.061 19:04:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:08.061 19:04:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:08.061 19:04:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:08.061 19:04:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:08.061 19:04:18 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:06:08.061 19:04:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:08.061 19:04:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:08.061 19:04:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:08.061 19:04:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:08.061 19:04:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:08.061 19:04:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:08.061 19:04:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:08.061 19:04:18 -- common/autotest_common.sh@1543 -- # continue 00:06:08.061 19:04:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:08.061 19:04:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.061 19:04:18 -- common/autotest_common.sh@10 -- # set +x 00:06:08.061 19:04:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:08.061 19:04:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.061 
19:04:18 -- common/autotest_common.sh@10 -- # set +x 00:06:08.061 19:04:18 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:09.437 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:09.437 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:09.437 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:09.437 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:09.437 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:09.437 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:09.437 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:09.437 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:09.437 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:09.437 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:09.437 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:09.437 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:09.437 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:09.437 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:09.437 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:09.437 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:10.378 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:06:10.378 19:04:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:10.378 19:04:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.378 19:04:20 -- common/autotest_common.sh@10 -- # set +x 00:06:10.378 19:04:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:10.378 19:04:20 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:10.378 19:04:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:10.378 19:04:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:10.378 19:04:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:10.378 19:04:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:10.378 19:04:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:10.378 19:04:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:06:10.378 19:04:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:10.378 19:04:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:10.378 19:04:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:10.378 19:04:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:10.378 19:04:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:10.378 19:04:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:10.378 19:04:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:06:10.378 19:04:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:10.378 19:04:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:06:10.378 19:04:20 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:10.378 19:04:20 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:10.378 19:04:20 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:10.378 19:04:20 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:10.378 19:04:20 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:06:10.378 19:04:20 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:06:10.378 19:04:20 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=989055 00:06:10.378 19:04:20 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.378 19:04:20 -- common/autotest_common.sh@1585 -- # waitforlisten 989055 00:06:10.378 19:04:20 -- common/autotest_common.sh@835 -- # '[' -z 989055 ']' 00:06:10.378 19:04:20 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.378 19:04:20 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.378 19:04:20 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.378 19:04:20 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.378 19:04:20 -- common/autotest_common.sh@10 -- # set +x 00:06:10.637 [2024-12-06 19:04:20.956802] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:06:10.637 [2024-12-06 19:04:20.956914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989055 ] 00:06:10.637 [2024-12-06 19:04:21.031466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.637 [2024-12-06 19:04:21.087751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.896 19:04:21 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.896 19:04:21 -- common/autotest_common.sh@868 -- # return 0 00:06:10.896 19:04:21 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:10.896 19:04:21 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:10.896 19:04:21 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:06:14.182 nvme0n1 00:06:14.182 19:04:24 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:14.182 [2024-12-06 19:04:24.699016] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:14.182 [2024-12-06 19:04:24.699063] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:14.182 request: 00:06:14.182 { 00:06:14.182 "nvme_ctrlr_name": "nvme0", 00:06:14.182 "password": "test", 00:06:14.182 "method": 
"bdev_nvme_opal_revert", 00:06:14.182 "req_id": 1 00:06:14.182 } 00:06:14.182 Got JSON-RPC error response 00:06:14.182 response: 00:06:14.182 { 00:06:14.182 "code": -32603, 00:06:14.182 "message": "Internal error" 00:06:14.182 } 00:06:14.182 19:04:24 -- common/autotest_common.sh@1591 -- # true 00:06:14.182 19:04:24 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:14.182 19:04:24 -- common/autotest_common.sh@1595 -- # killprocess 989055 00:06:14.182 19:04:24 -- common/autotest_common.sh@954 -- # '[' -z 989055 ']' 00:06:14.182 19:04:24 -- common/autotest_common.sh@958 -- # kill -0 989055 00:06:14.182 19:04:24 -- common/autotest_common.sh@959 -- # uname 00:06:14.182 19:04:24 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.182 19:04:24 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 989055 00:06:14.182 19:04:24 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.182 19:04:24 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.182 19:04:24 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 989055' 00:06:14.182 killing process with pid 989055 00:06:14.182 19:04:24 -- common/autotest_common.sh@973 -- # kill 989055 00:06:14.440 19:04:24 -- common/autotest_common.sh@978 -- # wait 989055 00:06:16.354 19:04:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:16.354 19:04:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:16.354 19:04:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:16.354 19:04:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:16.354 19:04:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:16.354 19:04:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.354 19:04:26 -- common/autotest_common.sh@10 -- # set +x 00:06:16.354 19:04:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:16.354 19:04:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:16.354 19:04:26 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.354 19:04:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.354 19:04:26 -- common/autotest_common.sh@10 -- # set +x 00:06:16.354 ************************************ 00:06:16.354 START TEST env 00:06:16.354 ************************************ 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:16.354 * Looking for test storage... 00:06:16.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.354 19:04:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.354 19:04:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.354 19:04:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.354 19:04:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.354 19:04:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.354 19:04:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.354 19:04:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.354 19:04:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.354 19:04:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.354 19:04:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.354 19:04:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.354 19:04:26 env -- scripts/common.sh@344 -- # case "$op" in 00:06:16.354 19:04:26 env -- scripts/common.sh@345 -- # : 1 00:06:16.354 19:04:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.354 19:04:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.354 19:04:26 env -- scripts/common.sh@365 -- # decimal 1 00:06:16.354 19:04:26 env -- scripts/common.sh@353 -- # local d=1 00:06:16.354 19:04:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.354 19:04:26 env -- scripts/common.sh@355 -- # echo 1 00:06:16.354 19:04:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.354 19:04:26 env -- scripts/common.sh@366 -- # decimal 2 00:06:16.354 19:04:26 env -- scripts/common.sh@353 -- # local d=2 00:06:16.354 19:04:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.354 19:04:26 env -- scripts/common.sh@355 -- # echo 2 00:06:16.354 19:04:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.354 19:04:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.354 19:04:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.354 19:04:26 env -- scripts/common.sh@368 -- # return 0 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.354 --rc genhtml_branch_coverage=1 00:06:16.354 --rc genhtml_function_coverage=1 00:06:16.354 --rc genhtml_legend=1 00:06:16.354 --rc geninfo_all_blocks=1 00:06:16.354 --rc geninfo_unexecuted_blocks=1 00:06:16.354 00:06:16.354 ' 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.354 --rc genhtml_branch_coverage=1 00:06:16.354 --rc genhtml_function_coverage=1 00:06:16.354 --rc genhtml_legend=1 00:06:16.354 --rc geninfo_all_blocks=1 00:06:16.354 --rc geninfo_unexecuted_blocks=1 00:06:16.354 00:06:16.354 ' 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:16.354 --rc genhtml_branch_coverage=1 00:06:16.354 --rc genhtml_function_coverage=1 00:06:16.354 --rc genhtml_legend=1 00:06:16.354 --rc geninfo_all_blocks=1 00:06:16.354 --rc geninfo_unexecuted_blocks=1 00:06:16.354 00:06:16.354 ' 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.354 --rc genhtml_branch_coverage=1 00:06:16.354 --rc genhtml_function_coverage=1 00:06:16.354 --rc genhtml_legend=1 00:06:16.354 --rc geninfo_all_blocks=1 00:06:16.354 --rc geninfo_unexecuted_blocks=1 00:06:16.354 00:06:16.354 ' 00:06:16.354 19:04:26 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.354 19:04:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.354 19:04:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.354 ************************************ 00:06:16.354 START TEST env_memory 00:06:16.354 ************************************ 00:06:16.354 19:04:26 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:16.354 00:06:16.354 00:06:16.354 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.354 http://cunit.sourceforge.net/ 00:06:16.354 00:06:16.354 00:06:16.354 Suite: memory 00:06:16.354 Test: alloc and free memory map ...[2024-12-06 19:04:26.765100] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:16.354 passed 00:06:16.354 Test: mem map translation ...[2024-12-06 19:04:26.785536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:16.354 [2024-12-06 
19:04:26.785560] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:16.354 [2024-12-06 19:04:26.785612] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:16.354 [2024-12-06 19:04:26.785624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:16.355 passed 00:06:16.355 Test: mem map registration ...[2024-12-06 19:04:26.828124] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:16.355 [2024-12-06 19:04:26.828143] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:16.355 passed 00:06:16.355 Test: mem map adjacent registrations ...passed 00:06:16.355 00:06:16.355 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.355 suites 1 1 n/a 0 0 00:06:16.355 tests 4 4 4 0 0 00:06:16.355 asserts 152 152 152 0 n/a 00:06:16.355 00:06:16.355 Elapsed time = 0.144 seconds 00:06:16.355 00:06:16.355 real 0m0.153s 00:06:16.355 user 0m0.143s 00:06:16.355 sys 0m0.009s 00:06:16.355 19:04:26 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.355 19:04:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:16.355 ************************************ 00:06:16.355 END TEST env_memory 00:06:16.355 ************************************ 00:06:16.355 19:04:26 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:16.355 19:04:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:16.355 19:04:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.355 19:04:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.355 ************************************ 00:06:16.355 START TEST env_vtophys 00:06:16.355 ************************************ 00:06:16.355 19:04:26 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:16.613 EAL: lib.eal log level changed from notice to debug 00:06:16.613 EAL: Detected lcore 0 as core 0 on socket 0 00:06:16.613 EAL: Detected lcore 1 as core 1 on socket 0 00:06:16.613 EAL: Detected lcore 2 as core 2 on socket 0 00:06:16.613 EAL: Detected lcore 3 as core 3 on socket 0 00:06:16.613 EAL: Detected lcore 4 as core 4 on socket 0 00:06:16.613 EAL: Detected lcore 5 as core 5 on socket 0 00:06:16.613 EAL: Detected lcore 6 as core 8 on socket 0 00:06:16.613 EAL: Detected lcore 7 as core 9 on socket 0 00:06:16.613 EAL: Detected lcore 8 as core 10 on socket 0 00:06:16.613 EAL: Detected lcore 9 as core 11 on socket 0 00:06:16.613 EAL: Detected lcore 10 as core 12 on socket 0 00:06:16.613 EAL: Detected lcore 11 as core 13 on socket 0 00:06:16.613 EAL: Detected lcore 12 as core 0 on socket 1 00:06:16.613 EAL: Detected lcore 13 as core 1 on socket 1 00:06:16.613 EAL: Detected lcore 14 as core 2 on socket 1 00:06:16.613 EAL: Detected lcore 15 as core 3 on socket 1 00:06:16.613 EAL: Detected lcore 16 as core 4 on socket 1 00:06:16.613 EAL: Detected lcore 17 as core 5 on socket 1 00:06:16.613 EAL: Detected lcore 18 as core 8 on socket 1 00:06:16.613 EAL: Detected lcore 19 as core 9 on socket 1 00:06:16.613 EAL: Detected lcore 20 as core 10 on socket 1 00:06:16.613 EAL: Detected lcore 21 as core 11 on socket 1 00:06:16.613 EAL: Detected lcore 22 as core 12 on socket 1 00:06:16.613 EAL: Detected lcore 23 as core 13 on socket 1 00:06:16.613 EAL: Detected lcore 24 as core 0 on socket 0 00:06:16.613 EAL: Detected lcore 25 as core 
1 on socket 0 00:06:16.613 EAL: Detected lcore 26 as core 2 on socket 0 00:06:16.613 EAL: Detected lcore 27 as core 3 on socket 0 00:06:16.613 EAL: Detected lcore 28 as core 4 on socket 0 00:06:16.613 EAL: Detected lcore 29 as core 5 on socket 0 00:06:16.613 EAL: Detected lcore 30 as core 8 on socket 0 00:06:16.613 EAL: Detected lcore 31 as core 9 on socket 0 00:06:16.613 EAL: Detected lcore 32 as core 10 on socket 0 00:06:16.613 EAL: Detected lcore 33 as core 11 on socket 0 00:06:16.613 EAL: Detected lcore 34 as core 12 on socket 0 00:06:16.613 EAL: Detected lcore 35 as core 13 on socket 0 00:06:16.613 EAL: Detected lcore 36 as core 0 on socket 1 00:06:16.613 EAL: Detected lcore 37 as core 1 on socket 1 00:06:16.613 EAL: Detected lcore 38 as core 2 on socket 1 00:06:16.613 EAL: Detected lcore 39 as core 3 on socket 1 00:06:16.613 EAL: Detected lcore 40 as core 4 on socket 1 00:06:16.613 EAL: Detected lcore 41 as core 5 on socket 1 00:06:16.613 EAL: Detected lcore 42 as core 8 on socket 1 00:06:16.613 EAL: Detected lcore 43 as core 9 on socket 1 00:06:16.613 EAL: Detected lcore 44 as core 10 on socket 1 00:06:16.613 EAL: Detected lcore 45 as core 11 on socket 1 00:06:16.613 EAL: Detected lcore 46 as core 12 on socket 1 00:06:16.613 EAL: Detected lcore 47 as core 13 on socket 1 00:06:16.613 EAL: Maximum logical cores by configuration: 128 00:06:16.613 EAL: Detected CPU lcores: 48 00:06:16.613 EAL: Detected NUMA nodes: 2 00:06:16.613 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:16.613 EAL: Detected shared linkage of DPDK 00:06:16.613 EAL: No shared files mode enabled, IPC will be disabled 00:06:16.613 EAL: Bus pci wants IOVA as 'DC' 00:06:16.613 EAL: Buses did not request a specific IOVA mode. 00:06:16.613 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:16.613 EAL: Selected IOVA mode 'VA' 00:06:16.613 EAL: Probing VFIO support... 
00:06:16.613 EAL: IOMMU type 1 (Type 1) is supported 00:06:16.613 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:16.613 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:16.613 EAL: VFIO support initialized 00:06:16.613 EAL: Ask a virtual area of 0x2e000 bytes 00:06:16.613 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:16.613 EAL: Setting up physically contiguous memory... 00:06:16.613 EAL: Setting maximum number of open files to 524288 00:06:16.613 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:16.613 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:16.613 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:16.613 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.613 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:16.613 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.613 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.613 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:16.613 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:16.613 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.613 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:16.613 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.613 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.613 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:16.613 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:16.613 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.613 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:16.613 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.613 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.613 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:16.613 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:16.613 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.613 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:16.613 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.613 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.613 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:16.614 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:16.614 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:16.614 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.614 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:16.614 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.614 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.614 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:16.614 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:16.614 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.614 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:16.614 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.614 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.614 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:16.614 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:16.614 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.614 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:16.614 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.614 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.614 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:16.614 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:16.614 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.614 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:16.614 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.614 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.614 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:06:16.614 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:16.614 EAL: Hugepages will be freed exactly as allocated. 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: TSC frequency is ~2700000 KHz 00:06:16.614 EAL: Main lcore 0 is ready (tid=7ff5d9753a00;cpuset=[0]) 00:06:16.614 EAL: Trying to obtain current memory policy. 00:06:16.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.614 EAL: Restoring previous memory policy: 0 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was expanded by 2MB 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:16.614 EAL: Mem event callback 'spdk:(nil)' registered 00:06:16.614 00:06:16.614 00:06:16.614 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.614 http://cunit.sourceforge.net/ 00:06:16.614 00:06:16.614 00:06:16.614 Suite: components_suite 00:06:16.614 Test: vtophys_malloc_test ...passed 00:06:16.614 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:16.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.614 EAL: Restoring previous memory policy: 4 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was expanded by 4MB 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was shrunk by 4MB 00:06:16.614 EAL: Trying to obtain current memory policy. 
00:06:16.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.614 EAL: Restoring previous memory policy: 4 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was expanded by 6MB 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was shrunk by 6MB 00:06:16.614 EAL: Trying to obtain current memory policy. 00:06:16.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.614 EAL: Restoring previous memory policy: 4 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was expanded by 10MB 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was shrunk by 10MB 00:06:16.614 EAL: Trying to obtain current memory policy. 00:06:16.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.614 EAL: Restoring previous memory policy: 4 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was expanded by 18MB 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was shrunk by 18MB 00:06:16.614 EAL: Trying to obtain current memory policy. 
00:06:16.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.614 EAL: Restoring previous memory policy: 4 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was expanded by 34MB 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was shrunk by 34MB 00:06:16.614 EAL: Trying to obtain current memory policy. 00:06:16.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.614 EAL: Restoring previous memory policy: 4 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was expanded by 66MB 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was shrunk by 66MB 00:06:16.614 EAL: Trying to obtain current memory policy. 00:06:16.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.614 EAL: Restoring previous memory policy: 4 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was expanded by 130MB 00:06:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.614 EAL: request: mp_malloc_sync 00:06:16.614 EAL: No shared files mode enabled, IPC is disabled 00:06:16.614 EAL: Heap on socket 0 was shrunk by 130MB 00:06:16.614 EAL: Trying to obtain current memory policy. 
00:06:16.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.875 EAL: Restoring previous memory policy: 4 00:06:16.875 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.875 EAL: request: mp_malloc_sync 00:06:16.875 EAL: No shared files mode enabled, IPC is disabled 00:06:16.875 EAL: Heap on socket 0 was expanded by 258MB 00:06:16.875 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.875 EAL: request: mp_malloc_sync 00:06:16.875 EAL: No shared files mode enabled, IPC is disabled 00:06:16.875 EAL: Heap on socket 0 was shrunk by 258MB 00:06:16.875 EAL: Trying to obtain current memory policy. 00:06:16.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.162 EAL: Restoring previous memory policy: 4 00:06:17.162 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.162 EAL: request: mp_malloc_sync 00:06:17.162 EAL: No shared files mode enabled, IPC is disabled 00:06:17.162 EAL: Heap on socket 0 was expanded by 514MB 00:06:17.162 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.162 EAL: request: mp_malloc_sync 00:06:17.162 EAL: No shared files mode enabled, IPC is disabled 00:06:17.162 EAL: Heap on socket 0 was shrunk by 514MB 00:06:17.162 EAL: Trying to obtain current memory policy. 
00:06:17.162 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.425 EAL: Restoring previous memory policy: 4 00:06:17.425 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.425 EAL: request: mp_malloc_sync 00:06:17.425 EAL: No shared files mode enabled, IPC is disabled 00:06:17.425 EAL: Heap on socket 0 was expanded by 1026MB 00:06:17.682 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.939 EAL: request: mp_malloc_sync 00:06:17.939 EAL: No shared files mode enabled, IPC is disabled 00:06:17.939 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:17.939 passed 00:06:17.939 00:06:17.939 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.939 suites 1 1 n/a 0 0 00:06:17.939 tests 2 2 2 0 0 00:06:17.939 asserts 497 497 497 0 n/a 00:06:17.939 00:06:17.939 Elapsed time = 1.326 seconds 00:06:17.939 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.939 EAL: request: mp_malloc_sync 00:06:17.939 EAL: No shared files mode enabled, IPC is disabled 00:06:17.939 EAL: Heap on socket 0 was shrunk by 2MB 00:06:17.939 EAL: No shared files mode enabled, IPC is disabled 00:06:17.939 EAL: No shared files mode enabled, IPC is disabled 00:06:17.939 EAL: No shared files mode enabled, IPC is disabled 00:06:17.939 00:06:17.939 real 0m1.451s 00:06:17.939 user 0m0.861s 00:06:17.939 sys 0m0.554s 00:06:17.939 19:04:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.939 19:04:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:17.939 ************************************ 00:06:17.939 END TEST env_vtophys 00:06:17.939 ************************************ 00:06:17.939 19:04:28 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:17.939 19:04:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.939 19:04:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.939 19:04:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.939 
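Each test binary in this log is launched through a `run_test` helper (visible in the xtrace lines as `run_test env_pci …`), which emits the `START TEST` / `END TEST` banners and propagates the test's exit status. A minimal sketch of such a wrapper — hypothetical and simplified from what the banners and xtrace output imply, not SPDK's actual `autotest_common.sh` implementation:

```shell
#!/bin/bash
# run_test: print banners around a named test command and return its status.
# Simplified illustration; the real SPDK helper also handles xtrace and timing.
run_test() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"                # run the test command with its arguments
  local rc=$?         # capture the test's exit status
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

# Example invocation mirroring the log's pattern:
run_test demo_test true
```

A failing command would make `run_test` return non-zero, which is what lets the surrounding pipeline stage abort on the first failed test.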
************************************ 00:06:17.939 START TEST env_pci 00:06:17.939 ************************************ 00:06:17.939 19:04:28 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:17.939 00:06:17.939 00:06:17.939 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.939 http://cunit.sourceforge.net/ 00:06:17.939 00:06:17.939 00:06:17.939 Suite: pci 00:06:17.939 Test: pci_hook ...[2024-12-06 19:04:28.441614] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 989959 has claimed it 00:06:17.939 EAL: Cannot find device (10000:00:01.0) 00:06:17.939 EAL: Failed to attach device on primary process 00:06:17.939 passed 00:06:17.939 00:06:17.939 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.939 suites 1 1 n/a 0 0 00:06:17.939 tests 1 1 1 0 0 00:06:17.939 asserts 25 25 25 0 n/a 00:06:17.939 00:06:17.939 Elapsed time = 0.022 seconds 00:06:17.939 00:06:17.939 real 0m0.035s 00:06:17.939 user 0m0.013s 00:06:17.939 sys 0m0.022s 00:06:17.939 19:04:28 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.939 19:04:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:17.939 ************************************ 00:06:17.939 END TEST env_pci 00:06:17.939 ************************************ 00:06:17.939 19:04:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:17.939 19:04:28 env -- env/env.sh@15 -- # uname 00:06:17.939 19:04:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:17.939 19:04:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:17.939 19:04:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:17.939 19:04:28 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:17.939 19:04:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.939 19:04:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.199 ************************************ 00:06:18.199 START TEST env_dpdk_post_init 00:06:18.199 ************************************ 00:06:18.199 19:04:28 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.199 EAL: Detected CPU lcores: 48 00:06:18.199 EAL: Detected NUMA nodes: 2 00:06:18.199 EAL: Detected shared linkage of DPDK 00:06:18.199 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.199 EAL: Selected IOVA mode 'VA' 00:06:18.199 EAL: VFIO support initialized 00:06:18.199 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.199 EAL: Using IOMMU type 1 (Type 1) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:18.199 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:18.199 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:18.459 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:18.459 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:18.459 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:18.459 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:19.029 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:22.308 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:22.308 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:22.567 Starting DPDK initialization... 00:06:22.567 Starting SPDK post initialization... 00:06:22.567 SPDK NVMe probe 00:06:22.567 Attaching to 0000:88:00.0 00:06:22.567 Attached to 0000:88:00.0 00:06:22.567 Cleaning up... 00:06:22.567 00:06:22.567 real 0m4.452s 00:06:22.567 user 0m3.081s 00:06:22.567 sys 0m0.430s 00:06:22.567 19:04:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.567 19:04:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.567 ************************************ 00:06:22.567 END TEST env_dpdk_post_init 00:06:22.567 ************************************ 00:06:22.567 19:04:32 env -- env/env.sh@26 -- # uname 00:06:22.567 19:04:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:22.567 19:04:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.567 19:04:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.567 19:04:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.567 19:04:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.567 ************************************ 00:06:22.567 START TEST env_mem_callbacks 00:06:22.567 ************************************ 00:06:22.567 19:04:33 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.567 EAL: Detected CPU lcores: 48 00:06:22.567 EAL: Detected NUMA nodes: 2 00:06:22.567 EAL: Detected shared linkage of DPDK 00:06:22.567 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:22.567 EAL: Selected IOVA mode 'VA' 00:06:22.567 EAL: VFIO support initialized 00:06:22.567 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:22.567 00:06:22.567 00:06:22.567 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.567 http://cunit.sourceforge.net/ 00:06:22.567 00:06:22.567 00:06:22.567 Suite: memory 00:06:22.567 Test: test ... 00:06:22.567 register 0x200000200000 2097152 00:06:22.567 malloc 3145728 00:06:22.567 register 0x200000400000 4194304 00:06:22.567 buf 0x200000500000 len 3145728 PASSED 00:06:22.567 malloc 64 00:06:22.567 buf 0x2000004fff40 len 64 PASSED 00:06:22.567 malloc 4194304 00:06:22.567 register 0x200000800000 6291456 00:06:22.567 buf 0x200000a00000 len 4194304 PASSED 00:06:22.567 free 0x200000500000 3145728 00:06:22.567 free 0x2000004fff40 64 00:06:22.567 unregister 0x200000400000 4194304 PASSED 00:06:22.567 free 0x200000a00000 4194304 00:06:22.567 unregister 0x200000800000 6291456 PASSED 00:06:22.567 malloc 8388608 00:06:22.567 register 0x200000400000 10485760 00:06:22.567 buf 0x200000600000 len 8388608 PASSED 00:06:22.567 free 0x200000600000 8388608 00:06:22.567 unregister 0x200000400000 10485760 PASSED 00:06:22.567 passed 00:06:22.567 00:06:22.567 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.567 suites 1 1 n/a 0 0 00:06:22.567 tests 1 1 1 0 0 00:06:22.567 asserts 15 15 15 0 n/a 00:06:22.567 00:06:22.567 Elapsed time = 0.005 seconds 00:06:22.567 00:06:22.567 real 0m0.048s 00:06:22.567 user 0m0.016s 00:06:22.567 sys 0m0.032s 00:06:22.567 19:04:33 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.567 19:04:33 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:22.567 ************************************ 00:06:22.567 END TEST env_mem_callbacks 00:06:22.567 ************************************ 00:06:22.567 00:06:22.567 real 0m6.530s 00:06:22.567 user 0m4.311s 00:06:22.567 sys 0m1.263s 00:06:22.567 19:04:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.567 19:04:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.567 ************************************ 00:06:22.567 END TEST env 00:06:22.567 ************************************ 00:06:22.567 19:04:33 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:22.567 19:04:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.567 19:04:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.567 19:04:33 -- common/autotest_common.sh@10 -- # set +x 00:06:22.567 ************************************ 00:06:22.826 START TEST rpc 00:06:22.826 ************************************ 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:22.826 * Looking for test storage... 
00:06:22.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.826 19:04:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.826 19:04:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.826 19:04:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.826 19:04:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.826 19:04:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.826 19:04:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.826 19:04:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.826 19:04:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.826 19:04:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.826 19:04:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.826 19:04:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.826 19:04:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:22.826 19:04:33 rpc -- scripts/common.sh@345 -- # : 1 00:06:22.826 19:04:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.826 19:04:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.826 19:04:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:22.826 19:04:33 rpc -- scripts/common.sh@353 -- # local d=1 00:06:22.826 19:04:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.826 19:04:33 rpc -- scripts/common.sh@355 -- # echo 1 00:06:22.826 19:04:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.826 19:04:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:22.826 19:04:33 rpc -- scripts/common.sh@353 -- # local d=2 00:06:22.826 19:04:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.826 19:04:33 rpc -- scripts/common.sh@355 -- # echo 2 00:06:22.826 19:04:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.826 19:04:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.826 19:04:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.826 19:04:33 rpc -- scripts/common.sh@368 -- # return 0 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.826 --rc genhtml_branch_coverage=1 00:06:22.826 --rc genhtml_function_coverage=1 00:06:22.826 --rc genhtml_legend=1 00:06:22.826 --rc geninfo_all_blocks=1 00:06:22.826 --rc geninfo_unexecuted_blocks=1 00:06:22.826 00:06:22.826 ' 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.826 --rc genhtml_branch_coverage=1 00:06:22.826 --rc genhtml_function_coverage=1 00:06:22.826 --rc genhtml_legend=1 00:06:22.826 --rc geninfo_all_blocks=1 00:06:22.826 --rc geninfo_unexecuted_blocks=1 00:06:22.826 00:06:22.826 ' 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:22.826 --rc genhtml_branch_coverage=1 00:06:22.826 --rc genhtml_function_coverage=1 00:06:22.826 --rc genhtml_legend=1 00:06:22.826 --rc geninfo_all_blocks=1 00:06:22.826 --rc geninfo_unexecuted_blocks=1 00:06:22.826 00:06:22.826 ' 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.826 --rc genhtml_branch_coverage=1 00:06:22.826 --rc genhtml_function_coverage=1 00:06:22.826 --rc genhtml_legend=1 00:06:22.826 --rc geninfo_all_blocks=1 00:06:22.826 --rc geninfo_unexecuted_blocks=1 00:06:22.826 00:06:22.826 ' 00:06:22.826 19:04:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=990744 00:06:22.826 19:04:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:22.826 19:04:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.826 19:04:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 990744 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 990744 ']' 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.826 19:04:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.826 [2024-12-06 19:04:33.343150] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:06:22.826 [2024-12-06 19:04:33.343238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990744 ] 00:06:23.084 [2024-12-06 19:04:33.408988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.084 [2024-12-06 19:04:33.463634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:23.084 [2024-12-06 19:04:33.463703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 990744' to capture a snapshot of events at runtime. 00:06:23.084 [2024-12-06 19:04:33.463732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.084 [2024-12-06 19:04:33.463743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.084 [2024-12-06 19:04:33.463752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid990744 for offline analysis/debug. 
00:06:23.084 [2024-12-06 19:04:33.464310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.342 19:04:33 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.342 19:04:33 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.342 19:04:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:23.342 19:04:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:23.342 19:04:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:23.342 19:04:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:23.342 19:04:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.342 19:04:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.342 19:04:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.342 ************************************ 00:06:23.342 START TEST rpc_integrity 00:06:23.342 ************************************ 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.342 19:04:33 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:23.342 { 00:06:23.342 "name": "Malloc0", 00:06:23.342 "aliases": [ 00:06:23.342 "d72a6e3f-5bee-4699-82d7-207ae8d2bd94" 00:06:23.342 ], 00:06:23.342 "product_name": "Malloc disk", 00:06:23.342 "block_size": 512, 00:06:23.342 "num_blocks": 16384, 00:06:23.342 "uuid": "d72a6e3f-5bee-4699-82d7-207ae8d2bd94", 00:06:23.342 "assigned_rate_limits": { 00:06:23.342 "rw_ios_per_sec": 0, 00:06:23.342 "rw_mbytes_per_sec": 0, 00:06:23.342 "r_mbytes_per_sec": 0, 00:06:23.342 "w_mbytes_per_sec": 0 00:06:23.342 }, 00:06:23.342 "claimed": false, 00:06:23.342 "zoned": false, 00:06:23.342 "supported_io_types": { 00:06:23.342 "read": true, 00:06:23.342 "write": true, 00:06:23.342 "unmap": true, 00:06:23.342 "flush": true, 00:06:23.342 "reset": true, 00:06:23.342 "nvme_admin": false, 00:06:23.342 "nvme_io": false, 00:06:23.342 "nvme_io_md": false, 00:06:23.342 "write_zeroes": true, 00:06:23.342 "zcopy": true, 00:06:23.342 "get_zone_info": false, 00:06:23.342 
"zone_management": false, 00:06:23.342 "zone_append": false, 00:06:23.342 "compare": false, 00:06:23.342 "compare_and_write": false, 00:06:23.342 "abort": true, 00:06:23.342 "seek_hole": false, 00:06:23.342 "seek_data": false, 00:06:23.342 "copy": true, 00:06:23.342 "nvme_iov_md": false 00:06:23.342 }, 00:06:23.342 "memory_domains": [ 00:06:23.342 { 00:06:23.342 "dma_device_id": "system", 00:06:23.342 "dma_device_type": 1 00:06:23.342 }, 00:06:23.342 { 00:06:23.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.342 "dma_device_type": 2 00:06:23.342 } 00:06:23.342 ], 00:06:23.342 "driver_specific": {} 00:06:23.342 } 00:06:23.342 ]' 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.342 [2024-12-06 19:04:33.859629] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:23.342 [2024-12-06 19:04:33.859696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:23.342 [2024-12-06 19:04:33.859722] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15d0020 00:06:23.342 [2024-12-06 19:04:33.859737] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:23.342 [2024-12-06 19:04:33.861130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:23.342 [2024-12-06 19:04:33.861153] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:23.342 Passthru0 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.342 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.342 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:23.342 { 00:06:23.342 "name": "Malloc0", 00:06:23.342 "aliases": [ 00:06:23.342 "d72a6e3f-5bee-4699-82d7-207ae8d2bd94" 00:06:23.342 ], 00:06:23.343 "product_name": "Malloc disk", 00:06:23.343 "block_size": 512, 00:06:23.343 "num_blocks": 16384, 00:06:23.343 "uuid": "d72a6e3f-5bee-4699-82d7-207ae8d2bd94", 00:06:23.343 "assigned_rate_limits": { 00:06:23.343 "rw_ios_per_sec": 0, 00:06:23.343 "rw_mbytes_per_sec": 0, 00:06:23.343 "r_mbytes_per_sec": 0, 00:06:23.343 "w_mbytes_per_sec": 0 00:06:23.343 }, 00:06:23.343 "claimed": true, 00:06:23.343 "claim_type": "exclusive_write", 00:06:23.343 "zoned": false, 00:06:23.343 "supported_io_types": { 00:06:23.343 "read": true, 00:06:23.343 "write": true, 00:06:23.343 "unmap": true, 00:06:23.343 "flush": true, 00:06:23.343 "reset": true, 00:06:23.343 "nvme_admin": false, 00:06:23.343 "nvme_io": false, 00:06:23.343 "nvme_io_md": false, 00:06:23.343 "write_zeroes": true, 00:06:23.343 "zcopy": true, 00:06:23.343 "get_zone_info": false, 00:06:23.343 "zone_management": false, 00:06:23.343 "zone_append": false, 00:06:23.343 "compare": false, 00:06:23.343 "compare_and_write": false, 00:06:23.343 "abort": true, 00:06:23.343 "seek_hole": false, 00:06:23.343 "seek_data": false, 00:06:23.343 "copy": true, 00:06:23.343 "nvme_iov_md": false 00:06:23.343 }, 00:06:23.343 "memory_domains": [ 00:06:23.343 { 00:06:23.343 "dma_device_id": "system", 00:06:23.343 "dma_device_type": 1 00:06:23.343 }, 00:06:23.343 { 00:06:23.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.343 "dma_device_type": 2 00:06:23.343 } 00:06:23.343 ], 00:06:23.343 "driver_specific": {} 00:06:23.343 }, 00:06:23.343 { 
00:06:23.343 "name": "Passthru0", 00:06:23.343 "aliases": [ 00:06:23.343 "96811cb8-dc02-5f8e-bdfb-8631739a81ec" 00:06:23.343 ], 00:06:23.343 "product_name": "passthru", 00:06:23.343 "block_size": 512, 00:06:23.343 "num_blocks": 16384, 00:06:23.343 "uuid": "96811cb8-dc02-5f8e-bdfb-8631739a81ec", 00:06:23.343 "assigned_rate_limits": { 00:06:23.343 "rw_ios_per_sec": 0, 00:06:23.343 "rw_mbytes_per_sec": 0, 00:06:23.343 "r_mbytes_per_sec": 0, 00:06:23.343 "w_mbytes_per_sec": 0 00:06:23.343 }, 00:06:23.343 "claimed": false, 00:06:23.343 "zoned": false, 00:06:23.343 "supported_io_types": { 00:06:23.343 "read": true, 00:06:23.343 "write": true, 00:06:23.343 "unmap": true, 00:06:23.343 "flush": true, 00:06:23.343 "reset": true, 00:06:23.343 "nvme_admin": false, 00:06:23.343 "nvme_io": false, 00:06:23.343 "nvme_io_md": false, 00:06:23.343 "write_zeroes": true, 00:06:23.343 "zcopy": true, 00:06:23.343 "get_zone_info": false, 00:06:23.343 "zone_management": false, 00:06:23.343 "zone_append": false, 00:06:23.343 "compare": false, 00:06:23.343 "compare_and_write": false, 00:06:23.343 "abort": true, 00:06:23.343 "seek_hole": false, 00:06:23.343 "seek_data": false, 00:06:23.343 "copy": true, 00:06:23.343 "nvme_iov_md": false 00:06:23.343 }, 00:06:23.343 "memory_domains": [ 00:06:23.343 { 00:06:23.343 "dma_device_id": "system", 00:06:23.343 "dma_device_type": 1 00:06:23.343 }, 00:06:23.343 { 00:06:23.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.343 "dma_device_type": 2 00:06:23.343 } 00:06:23.343 ], 00:06:23.343 "driver_specific": { 00:06:23.343 "passthru": { 00:06:23.343 "name": "Passthru0", 00:06:23.343 "base_bdev_name": "Malloc0" 00:06:23.343 } 00:06:23.343 } 00:06:23.343 } 00:06:23.343 ]' 00:06:23.343 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:23.343 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:23.343 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:23.343 19:04:33 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.343 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.602 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:23.602 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.602 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.602 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:23.602 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.602 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.602 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:23.602 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:23.602 19:04:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:23.602 00:06:23.602 real 0m0.218s 00:06:23.602 user 0m0.138s 00:06:23.602 sys 0m0.025s 00:06:23.602 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.602 19:04:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 ************************************ 00:06:23.602 END TEST rpc_integrity 00:06:23.602 ************************************ 00:06:23.602 19:04:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:23.602 19:04:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.602 19:04:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.602 19:04:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 ************************************ 00:06:23.602 START TEST rpc_plugins 
00:06:23.602 ************************************ 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:23.602 { 00:06:23.602 "name": "Malloc1", 00:06:23.602 "aliases": [ 00:06:23.602 "02179d10-3443-4238-a830-4e56fa039aab" 00:06:23.602 ], 00:06:23.602 "product_name": "Malloc disk", 00:06:23.602 "block_size": 4096, 00:06:23.602 "num_blocks": 256, 00:06:23.602 "uuid": "02179d10-3443-4238-a830-4e56fa039aab", 00:06:23.602 "assigned_rate_limits": { 00:06:23.602 "rw_ios_per_sec": 0, 00:06:23.602 "rw_mbytes_per_sec": 0, 00:06:23.602 "r_mbytes_per_sec": 0, 00:06:23.602 "w_mbytes_per_sec": 0 00:06:23.602 }, 00:06:23.602 "claimed": false, 00:06:23.602 "zoned": false, 00:06:23.602 "supported_io_types": { 00:06:23.602 "read": true, 00:06:23.602 "write": true, 00:06:23.602 "unmap": true, 00:06:23.602 "flush": true, 00:06:23.602 "reset": true, 00:06:23.602 "nvme_admin": false, 00:06:23.602 "nvme_io": false, 00:06:23.602 "nvme_io_md": false, 00:06:23.602 "write_zeroes": true, 00:06:23.602 "zcopy": true, 00:06:23.602 "get_zone_info": false, 00:06:23.602 "zone_management": false, 00:06:23.602 
"zone_append": false, 00:06:23.602 "compare": false, 00:06:23.602 "compare_and_write": false, 00:06:23.602 "abort": true, 00:06:23.602 "seek_hole": false, 00:06:23.602 "seek_data": false, 00:06:23.602 "copy": true, 00:06:23.602 "nvme_iov_md": false 00:06:23.602 }, 00:06:23.602 "memory_domains": [ 00:06:23.602 { 00:06:23.602 "dma_device_id": "system", 00:06:23.602 "dma_device_type": 1 00:06:23.602 }, 00:06:23.602 { 00:06:23.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.602 "dma_device_type": 2 00:06:23.602 } 00:06:23.602 ], 00:06:23.602 "driver_specific": {} 00:06:23.602 } 00:06:23.602 ]' 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:23.602 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:23.602 00:06:23.602 real 0m0.109s 00:06:23.602 user 0m0.068s 00:06:23.602 sys 0m0.009s 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.602 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 ************************************ 
00:06:23.602 END TEST rpc_plugins 00:06:23.602 ************************************ 00:06:23.602 19:04:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:23.602 19:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.602 19:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.602 19:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 ************************************ 00:06:23.602 START TEST rpc_trace_cmd_test 00:06:23.602 ************************************ 00:06:23.602 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:23.602 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:23.602 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:23.602 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.602 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.602 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:23.602 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid990744", 00:06:23.602 "tpoint_group_mask": "0x8", 00:06:23.602 "iscsi_conn": { 00:06:23.602 "mask": "0x2", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "scsi": { 00:06:23.602 "mask": "0x4", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "bdev": { 00:06:23.602 "mask": "0x8", 00:06:23.602 "tpoint_mask": "0xffffffffffffffff" 00:06:23.602 }, 00:06:23.602 "nvmf_rdma": { 00:06:23.602 "mask": "0x10", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "nvmf_tcp": { 00:06:23.602 "mask": "0x20", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "ftl": { 00:06:23.602 "mask": "0x40", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "blobfs": { 00:06:23.602 "mask": "0x80", 00:06:23.602 
"tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "dsa": { 00:06:23.602 "mask": "0x200", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "thread": { 00:06:23.602 "mask": "0x400", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "nvme_pcie": { 00:06:23.602 "mask": "0x800", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "iaa": { 00:06:23.602 "mask": "0x1000", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.602 }, 00:06:23.602 "nvme_tcp": { 00:06:23.602 "mask": "0x2000", 00:06:23.602 "tpoint_mask": "0x0" 00:06:23.603 }, 00:06:23.603 "bdev_nvme": { 00:06:23.603 "mask": "0x4000", 00:06:23.603 "tpoint_mask": "0x0" 00:06:23.603 }, 00:06:23.603 "sock": { 00:06:23.603 "mask": "0x8000", 00:06:23.603 "tpoint_mask": "0x0" 00:06:23.603 }, 00:06:23.603 "blob": { 00:06:23.603 "mask": "0x10000", 00:06:23.603 "tpoint_mask": "0x0" 00:06:23.603 }, 00:06:23.603 "bdev_raid": { 00:06:23.603 "mask": "0x20000", 00:06:23.603 "tpoint_mask": "0x0" 00:06:23.603 }, 00:06:23.603 "scheduler": { 00:06:23.603 "mask": "0x40000", 00:06:23.603 "tpoint_mask": "0x0" 00:06:23.603 } 00:06:23.603 }' 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:23.860 00:06:23.860 real 0m0.202s 00:06:23.860 user 0m0.176s 00:06:23.860 sys 0m0.020s 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.860 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.860 ************************************ 00:06:23.860 END TEST rpc_trace_cmd_test 00:06:23.860 ************************************ 00:06:23.860 19:04:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:23.860 19:04:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:23.860 19:04:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:23.860 19:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.860 19:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.860 19:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.860 ************************************ 00:06:23.860 START TEST rpc_daemon_integrity 00:06:23.861 ************************************ 00:06:23.861 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:23.861 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:23.861 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.861 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.861 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.861 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:23.861 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:24.119 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:24.120 { 00:06:24.120 "name": "Malloc2", 00:06:24.120 "aliases": [ 00:06:24.120 "94a4ca8a-015b-4bef-85fd-92a7406e82fa" 00:06:24.120 ], 00:06:24.120 "product_name": "Malloc disk", 00:06:24.120 "block_size": 512, 00:06:24.120 "num_blocks": 16384, 00:06:24.120 "uuid": "94a4ca8a-015b-4bef-85fd-92a7406e82fa", 00:06:24.120 "assigned_rate_limits": { 00:06:24.120 "rw_ios_per_sec": 0, 00:06:24.120 "rw_mbytes_per_sec": 0, 00:06:24.120 "r_mbytes_per_sec": 0, 00:06:24.120 "w_mbytes_per_sec": 0 00:06:24.120 }, 00:06:24.120 "claimed": false, 00:06:24.120 "zoned": false, 00:06:24.120 "supported_io_types": { 00:06:24.120 "read": true, 00:06:24.120 "write": true, 00:06:24.120 "unmap": true, 00:06:24.120 "flush": true, 00:06:24.120 "reset": true, 00:06:24.120 "nvme_admin": false, 00:06:24.120 "nvme_io": false, 00:06:24.120 "nvme_io_md": false, 00:06:24.120 "write_zeroes": true, 00:06:24.120 "zcopy": true, 00:06:24.120 "get_zone_info": false, 00:06:24.120 "zone_management": false, 00:06:24.120 "zone_append": false, 00:06:24.120 "compare": false, 00:06:24.120 "compare_and_write": false, 00:06:24.120 "abort": true, 00:06:24.120 "seek_hole": false, 00:06:24.120 "seek_data": false, 00:06:24.120 "copy": true, 00:06:24.120 "nvme_iov_md": false 00:06:24.120 }, 00:06:24.120 "memory_domains": [ 00:06:24.120 { 
00:06:24.120 "dma_device_id": "system", 00:06:24.120 "dma_device_type": 1 00:06:24.120 }, 00:06:24.120 { 00:06:24.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.120 "dma_device_type": 2 00:06:24.120 } 00:06:24.120 ], 00:06:24.120 "driver_specific": {} 00:06:24.120 } 00:06:24.120 ]' 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.120 [2024-12-06 19:04:34.514126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:24.120 [2024-12-06 19:04:34.514181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.120 [2024-12-06 19:04:34.514205] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151f320 00:06:24.120 [2024-12-06 19:04:34.514223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.120 [2024-12-06 19:04:34.515390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.120 [2024-12-06 19:04:34.515413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:24.120 Passthru0 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:24.120 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:24.120 { 00:06:24.120 "name": "Malloc2", 00:06:24.120 "aliases": [ 00:06:24.120 "94a4ca8a-015b-4bef-85fd-92a7406e82fa" 00:06:24.120 ], 00:06:24.120 "product_name": "Malloc disk", 00:06:24.120 "block_size": 512, 00:06:24.120 "num_blocks": 16384, 00:06:24.120 "uuid": "94a4ca8a-015b-4bef-85fd-92a7406e82fa", 00:06:24.120 "assigned_rate_limits": { 00:06:24.120 "rw_ios_per_sec": 0, 00:06:24.120 "rw_mbytes_per_sec": 0, 00:06:24.120 "r_mbytes_per_sec": 0, 00:06:24.120 "w_mbytes_per_sec": 0 00:06:24.120 }, 00:06:24.120 "claimed": true, 00:06:24.120 "claim_type": "exclusive_write", 00:06:24.120 "zoned": false, 00:06:24.120 "supported_io_types": { 00:06:24.120 "read": true, 00:06:24.120 "write": true, 00:06:24.120 "unmap": true, 00:06:24.120 "flush": true, 00:06:24.120 "reset": true, 00:06:24.120 "nvme_admin": false, 00:06:24.120 "nvme_io": false, 00:06:24.120 "nvme_io_md": false, 00:06:24.120 "write_zeroes": true, 00:06:24.120 "zcopy": true, 00:06:24.120 "get_zone_info": false, 00:06:24.120 "zone_management": false, 00:06:24.120 "zone_append": false, 00:06:24.120 "compare": false, 00:06:24.120 "compare_and_write": false, 00:06:24.120 "abort": true, 00:06:24.120 "seek_hole": false, 00:06:24.120 "seek_data": false, 00:06:24.120 "copy": true, 00:06:24.120 "nvme_iov_md": false 00:06:24.120 }, 00:06:24.120 "memory_domains": [ 00:06:24.120 { 00:06:24.120 "dma_device_id": "system", 00:06:24.120 "dma_device_type": 1 00:06:24.120 }, 00:06:24.120 { 00:06:24.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.120 "dma_device_type": 2 00:06:24.120 } 00:06:24.120 ], 00:06:24.120 "driver_specific": {} 00:06:24.120 }, 00:06:24.120 { 00:06:24.120 "name": "Passthru0", 00:06:24.120 "aliases": [ 00:06:24.120 "021c3fd1-52e3-500a-8ba7-bd8a2dec28c9" 00:06:24.120 ], 00:06:24.120 "product_name": "passthru", 00:06:24.120 "block_size": 512, 00:06:24.120 "num_blocks": 16384, 00:06:24.120 "uuid": 
"021c3fd1-52e3-500a-8ba7-bd8a2dec28c9", 00:06:24.120 "assigned_rate_limits": { 00:06:24.120 "rw_ios_per_sec": 0, 00:06:24.120 "rw_mbytes_per_sec": 0, 00:06:24.120 "r_mbytes_per_sec": 0, 00:06:24.120 "w_mbytes_per_sec": 0 00:06:24.120 }, 00:06:24.120 "claimed": false, 00:06:24.120 "zoned": false, 00:06:24.120 "supported_io_types": { 00:06:24.120 "read": true, 00:06:24.120 "write": true, 00:06:24.120 "unmap": true, 00:06:24.120 "flush": true, 00:06:24.120 "reset": true, 00:06:24.121 "nvme_admin": false, 00:06:24.121 "nvme_io": false, 00:06:24.121 "nvme_io_md": false, 00:06:24.121 "write_zeroes": true, 00:06:24.121 "zcopy": true, 00:06:24.121 "get_zone_info": false, 00:06:24.121 "zone_management": false, 00:06:24.121 "zone_append": false, 00:06:24.121 "compare": false, 00:06:24.121 "compare_and_write": false, 00:06:24.121 "abort": true, 00:06:24.121 "seek_hole": false, 00:06:24.121 "seek_data": false, 00:06:24.121 "copy": true, 00:06:24.121 "nvme_iov_md": false 00:06:24.121 }, 00:06:24.121 "memory_domains": [ 00:06:24.121 { 00:06:24.121 "dma_device_id": "system", 00:06:24.121 "dma_device_type": 1 00:06:24.121 }, 00:06:24.121 { 00:06:24.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.121 "dma_device_type": 2 00:06:24.121 } 00:06:24.121 ], 00:06:24.121 "driver_specific": { 00:06:24.121 "passthru": { 00:06:24.121 "name": "Passthru0", 00:06:24.121 "base_bdev_name": "Malloc2" 00:06:24.121 } 00:06:24.121 } 00:06:24.121 } 00:06:24.121 ]' 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:24.121 00:06:24.121 real 0m0.211s 00:06:24.121 user 0m0.141s 00:06:24.121 sys 0m0.016s 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.121 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.121 ************************************ 00:06:24.121 END TEST rpc_daemon_integrity 00:06:24.121 ************************************ 00:06:24.121 19:04:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:24.121 19:04:34 rpc -- rpc/rpc.sh@84 -- # killprocess 990744 00:06:24.121 19:04:34 rpc -- common/autotest_common.sh@954 -- # '[' -z 990744 ']' 00:06:24.121 19:04:34 rpc -- common/autotest_common.sh@958 -- # kill -0 990744 00:06:24.121 19:04:34 rpc -- common/autotest_common.sh@959 -- # uname 00:06:24.121 19:04:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.121 19:04:34 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 990744 00:06:24.121 19:04:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.121 19:04:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.121 19:04:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 990744' 00:06:24.121 killing process with pid 990744 00:06:24.121 19:04:34 rpc -- common/autotest_common.sh@973 -- # kill 990744 00:06:24.121 19:04:34 rpc -- common/autotest_common.sh@978 -- # wait 990744 00:06:24.686 00:06:24.686 real 0m1.951s 00:06:24.686 user 0m2.407s 00:06:24.686 sys 0m0.613s 00:06:24.686 19:04:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.686 19:04:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.686 ************************************ 00:06:24.686 END TEST rpc 00:06:24.686 ************************************ 00:06:24.686 19:04:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:24.686 19:04:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.686 19:04:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.686 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.686 ************************************ 00:06:24.686 START TEST skip_rpc 00:06:24.686 ************************************ 00:06:24.686 19:04:35 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:24.686 * Looking for test storage... 
00:06:24.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:24.686 19:04:35 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.686 19:04:35 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.686 19:04:35 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.944 19:04:35 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.944 19:04:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:24.944 19:04:35 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.944 19:04:35 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.944 --rc genhtml_branch_coverage=1 00:06:24.944 --rc genhtml_function_coverage=1 00:06:24.944 --rc genhtml_legend=1 00:06:24.944 --rc geninfo_all_blocks=1 00:06:24.944 --rc geninfo_unexecuted_blocks=1 00:06:24.944 00:06:24.944 ' 00:06:24.944 19:04:35 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.944 --rc genhtml_branch_coverage=1 00:06:24.944 --rc genhtml_function_coverage=1 00:06:24.944 --rc genhtml_legend=1 00:06:24.944 --rc geninfo_all_blocks=1 00:06:24.944 --rc geninfo_unexecuted_blocks=1 00:06:24.944 00:06:24.944 ' 00:06:24.944 19:04:35 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:24.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.944 --rc genhtml_branch_coverage=1 00:06:24.944 --rc genhtml_function_coverage=1 00:06:24.944 --rc genhtml_legend=1 00:06:24.944 --rc geninfo_all_blocks=1 00:06:24.944 --rc geninfo_unexecuted_blocks=1 00:06:24.944 00:06:24.944 ' 00:06:24.944 19:04:35 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.944 --rc genhtml_branch_coverage=1 00:06:24.944 --rc genhtml_function_coverage=1 00:06:24.944 --rc genhtml_legend=1 00:06:24.944 --rc geninfo_all_blocks=1 00:06:24.944 --rc geninfo_unexecuted_blocks=1 00:06:24.944 00:06:24.944 ' 00:06:24.944 19:04:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:24.944 19:04:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:24.944 19:04:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:24.944 19:04:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.944 19:04:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.944 19:04:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.944 ************************************ 00:06:24.944 START TEST skip_rpc 00:06:24.944 ************************************ 00:06:24.944 19:04:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:24.944 19:04:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=991100 00:06:24.944 19:04:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:24.944 19:04:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.944 19:04:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:24.944 [2024-12-06 19:04:35.367733] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:06:24.944 [2024-12-06 19:04:35.367827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991100 ] 00:06:24.944 [2024-12-06 19:04:35.432578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.944 [2024-12-06 19:04:35.491244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.197 19:04:40 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 991100 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 991100 ']' 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 991100 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991100 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991100' 00:06:30.197 killing process with pid 991100 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 991100 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 991100 00:06:30.197 00:06:30.197 real 0m5.440s 00:06:30.197 user 0m5.146s 00:06:30.197 sys 0m0.307s 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.197 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.197 ************************************ 00:06:30.197 END TEST skip_rpc 00:06:30.197 ************************************ 00:06:30.455 19:04:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:30.455 19:04:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.455 19:04:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.455 19:04:40 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:30.455 ************************************ 00:06:30.455 START TEST skip_rpc_with_json 00:06:30.455 ************************************ 00:06:30.455 19:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:30.455 19:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:30.455 19:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=991764 00:06:30.455 19:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.455 19:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.455 19:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 991764 00:06:30.455 19:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 991764 ']' 00:06:30.455 19:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.456 19:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.456 19:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.456 19:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.456 19:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.456 [2024-12-06 19:04:40.862736] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:06:30.456 [2024-12-06 19:04:40.862826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991764 ] 00:06:30.456 [2024-12-06 19:04:40.927544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.456 [2024-12-06 19:04:40.983786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.714 [2024-12-06 19:04:41.256636] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:30.714 request: 00:06:30.714 { 00:06:30.714 "trtype": "tcp", 00:06:30.714 "method": "nvmf_get_transports", 00:06:30.714 "req_id": 1 00:06:30.714 } 00:06:30.714 Got JSON-RPC error response 00:06:30.714 response: 00:06:30.714 { 00:06:30.714 "code": -19, 00:06:30.714 "message": "No such device" 00:06:30.714 } 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.714 [2024-12-06 19:04:41.264782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.714 19:04:41 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.714 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.972 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.972 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:30.972 { 00:06:30.972 "subsystems": [ 00:06:30.972 { 00:06:30.972 "subsystem": "fsdev", 00:06:30.972 "config": [ 00:06:30.972 { 00:06:30.972 "method": "fsdev_set_opts", 00:06:30.972 "params": { 00:06:30.972 "fsdev_io_pool_size": 65535, 00:06:30.972 "fsdev_io_cache_size": 256 00:06:30.972 } 00:06:30.972 } 00:06:30.972 ] 00:06:30.972 }, 00:06:30.972 { 00:06:30.972 "subsystem": "vfio_user_target", 00:06:30.972 "config": null 00:06:30.972 }, 00:06:30.972 { 00:06:30.972 "subsystem": "keyring", 00:06:30.972 "config": [] 00:06:30.972 }, 00:06:30.972 { 00:06:30.972 "subsystem": "iobuf", 00:06:30.972 "config": [ 00:06:30.972 { 00:06:30.972 "method": "iobuf_set_options", 00:06:30.972 "params": { 00:06:30.972 "small_pool_count": 8192, 00:06:30.972 "large_pool_count": 1024, 00:06:30.972 "small_bufsize": 8192, 00:06:30.972 "large_bufsize": 135168, 00:06:30.972 "enable_numa": false 00:06:30.972 } 00:06:30.972 } 00:06:30.972 ] 00:06:30.972 }, 00:06:30.972 { 00:06:30.972 "subsystem": "sock", 00:06:30.972 "config": [ 00:06:30.972 { 00:06:30.972 "method": "sock_set_default_impl", 00:06:30.972 "params": { 00:06:30.972 "impl_name": "posix" 00:06:30.972 } 00:06:30.972 }, 00:06:30.972 { 00:06:30.972 "method": "sock_impl_set_options", 00:06:30.972 "params": { 00:06:30.972 "impl_name": "ssl", 00:06:30.972 "recv_buf_size": 4096, 00:06:30.972 "send_buf_size": 4096, 
00:06:30.972 "enable_recv_pipe": true, 00:06:30.972 "enable_quickack": false, 00:06:30.972 "enable_placement_id": 0, 00:06:30.972 "enable_zerocopy_send_server": true, 00:06:30.972 "enable_zerocopy_send_client": false, 00:06:30.972 "zerocopy_threshold": 0, 00:06:30.972 "tls_version": 0, 00:06:30.972 "enable_ktls": false 00:06:30.972 } 00:06:30.972 }, 00:06:30.972 { 00:06:30.972 "method": "sock_impl_set_options", 00:06:30.972 "params": { 00:06:30.972 "impl_name": "posix", 00:06:30.972 "recv_buf_size": 2097152, 00:06:30.972 "send_buf_size": 2097152, 00:06:30.972 "enable_recv_pipe": true, 00:06:30.972 "enable_quickack": false, 00:06:30.972 "enable_placement_id": 0, 00:06:30.972 "enable_zerocopy_send_server": true, 00:06:30.972 "enable_zerocopy_send_client": false, 00:06:30.972 "zerocopy_threshold": 0, 00:06:30.972 "tls_version": 0, 00:06:30.972 "enable_ktls": false 00:06:30.972 } 00:06:30.972 } 00:06:30.972 ] 00:06:30.972 }, 00:06:30.972 { 00:06:30.972 "subsystem": "vmd", 00:06:30.972 "config": [] 00:06:30.972 }, 00:06:30.972 { 00:06:30.973 "subsystem": "accel", 00:06:30.973 "config": [ 00:06:30.973 { 00:06:30.973 "method": "accel_set_options", 00:06:30.973 "params": { 00:06:30.973 "small_cache_size": 128, 00:06:30.973 "large_cache_size": 16, 00:06:30.973 "task_count": 2048, 00:06:30.973 "sequence_count": 2048, 00:06:30.973 "buf_count": 2048 00:06:30.973 } 00:06:30.973 } 00:06:30.973 ] 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "subsystem": "bdev", 00:06:30.973 "config": [ 00:06:30.973 { 00:06:30.973 "method": "bdev_set_options", 00:06:30.973 "params": { 00:06:30.973 "bdev_io_pool_size": 65535, 00:06:30.973 "bdev_io_cache_size": 256, 00:06:30.973 "bdev_auto_examine": true, 00:06:30.973 "iobuf_small_cache_size": 128, 00:06:30.973 "iobuf_large_cache_size": 16 00:06:30.973 } 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "method": "bdev_raid_set_options", 00:06:30.973 "params": { 00:06:30.973 "process_window_size_kb": 1024, 00:06:30.973 "process_max_bandwidth_mb_sec": 0 
00:06:30.973 } 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "method": "bdev_iscsi_set_options", 00:06:30.973 "params": { 00:06:30.973 "timeout_sec": 30 00:06:30.973 } 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "method": "bdev_nvme_set_options", 00:06:30.973 "params": { 00:06:30.973 "action_on_timeout": "none", 00:06:30.973 "timeout_us": 0, 00:06:30.973 "timeout_admin_us": 0, 00:06:30.973 "keep_alive_timeout_ms": 10000, 00:06:30.973 "arbitration_burst": 0, 00:06:30.973 "low_priority_weight": 0, 00:06:30.973 "medium_priority_weight": 0, 00:06:30.973 "high_priority_weight": 0, 00:06:30.973 "nvme_adminq_poll_period_us": 10000, 00:06:30.973 "nvme_ioq_poll_period_us": 0, 00:06:30.973 "io_queue_requests": 0, 00:06:30.973 "delay_cmd_submit": true, 00:06:30.973 "transport_retry_count": 4, 00:06:30.973 "bdev_retry_count": 3, 00:06:30.973 "transport_ack_timeout": 0, 00:06:30.973 "ctrlr_loss_timeout_sec": 0, 00:06:30.973 "reconnect_delay_sec": 0, 00:06:30.973 "fast_io_fail_timeout_sec": 0, 00:06:30.973 "disable_auto_failback": false, 00:06:30.973 "generate_uuids": false, 00:06:30.973 "transport_tos": 0, 00:06:30.973 "nvme_error_stat": false, 00:06:30.973 "rdma_srq_size": 0, 00:06:30.973 "io_path_stat": false, 00:06:30.973 "allow_accel_sequence": false, 00:06:30.973 "rdma_max_cq_size": 0, 00:06:30.973 "rdma_cm_event_timeout_ms": 0, 00:06:30.973 "dhchap_digests": [ 00:06:30.973 "sha256", 00:06:30.973 "sha384", 00:06:30.973 "sha512" 00:06:30.973 ], 00:06:30.973 "dhchap_dhgroups": [ 00:06:30.973 "null", 00:06:30.973 "ffdhe2048", 00:06:30.973 "ffdhe3072", 00:06:30.973 "ffdhe4096", 00:06:30.973 "ffdhe6144", 00:06:30.973 "ffdhe8192" 00:06:30.973 ], 00:06:30.973 "rdma_umr_per_io": false 00:06:30.973 } 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "method": "bdev_nvme_set_hotplug", 00:06:30.973 "params": { 00:06:30.973 "period_us": 100000, 00:06:30.973 "enable": false 00:06:30.973 } 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "method": "bdev_wait_for_examine" 00:06:30.973 } 00:06:30.973 
] 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "subsystem": "scsi", 00:06:30.973 "config": null 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "subsystem": "scheduler", 00:06:30.973 "config": [ 00:06:30.973 { 00:06:30.973 "method": "framework_set_scheduler", 00:06:30.973 "params": { 00:06:30.973 "name": "static" 00:06:30.973 } 00:06:30.973 } 00:06:30.973 ] 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "subsystem": "vhost_scsi", 00:06:30.973 "config": [] 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "subsystem": "vhost_blk", 00:06:30.973 "config": [] 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "subsystem": "ublk", 00:06:30.973 "config": [] 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "subsystem": "nbd", 00:06:30.973 "config": [] 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "subsystem": "nvmf", 00:06:30.973 "config": [ 00:06:30.973 { 00:06:30.973 "method": "nvmf_set_config", 00:06:30.973 "params": { 00:06:30.973 "discovery_filter": "match_any", 00:06:30.973 "admin_cmd_passthru": { 00:06:30.973 "identify_ctrlr": false 00:06:30.973 }, 00:06:30.973 "dhchap_digests": [ 00:06:30.973 "sha256", 00:06:30.973 "sha384", 00:06:30.973 "sha512" 00:06:30.973 ], 00:06:30.973 "dhchap_dhgroups": [ 00:06:30.973 "null", 00:06:30.973 "ffdhe2048", 00:06:30.973 "ffdhe3072", 00:06:30.973 "ffdhe4096", 00:06:30.973 "ffdhe6144", 00:06:30.973 "ffdhe8192" 00:06:30.973 ] 00:06:30.973 } 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "method": "nvmf_set_max_subsystems", 00:06:30.973 "params": { 00:06:30.973 "max_subsystems": 1024 00:06:30.973 } 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "method": "nvmf_set_crdt", 00:06:30.973 "params": { 00:06:30.973 "crdt1": 0, 00:06:30.973 "crdt2": 0, 00:06:30.973 "crdt3": 0 00:06:30.973 } 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "method": "nvmf_create_transport", 00:06:30.973 "params": { 00:06:30.973 "trtype": "TCP", 00:06:30.973 "max_queue_depth": 128, 00:06:30.973 "max_io_qpairs_per_ctrlr": 127, 00:06:30.973 "in_capsule_data_size": 4096, 00:06:30.973 "max_io_size": 
131072, 00:06:30.973 "io_unit_size": 131072, 00:06:30.973 "max_aq_depth": 128, 00:06:30.973 "num_shared_buffers": 511, 00:06:30.973 "buf_cache_size": 4294967295, 00:06:30.973 "dif_insert_or_strip": false, 00:06:30.973 "zcopy": false, 00:06:30.973 "c2h_success": true, 00:06:30.973 "sock_priority": 0, 00:06:30.973 "abort_timeout_sec": 1, 00:06:30.973 "ack_timeout": 0, 00:06:30.973 "data_wr_pool_size": 0 00:06:30.973 } 00:06:30.973 } 00:06:30.973 ] 00:06:30.973 }, 00:06:30.973 { 00:06:30.973 "subsystem": "iscsi", 00:06:30.973 "config": [ 00:06:30.973 { 00:06:30.973 "method": "iscsi_set_options", 00:06:30.973 "params": { 00:06:30.973 "node_base": "iqn.2016-06.io.spdk", 00:06:30.973 "max_sessions": 128, 00:06:30.973 "max_connections_per_session": 2, 00:06:30.973 "max_queue_depth": 64, 00:06:30.973 "default_time2wait": 2, 00:06:30.973 "default_time2retain": 20, 00:06:30.973 "first_burst_length": 8192, 00:06:30.973 "immediate_data": true, 00:06:30.973 "allow_duplicated_isid": false, 00:06:30.973 "error_recovery_level": 0, 00:06:30.973 "nop_timeout": 60, 00:06:30.973 "nop_in_interval": 30, 00:06:30.973 "disable_chap": false, 00:06:30.973 "require_chap": false, 00:06:30.973 "mutual_chap": false, 00:06:30.973 "chap_group": 0, 00:06:30.973 "max_large_datain_per_connection": 64, 00:06:30.973 "max_r2t_per_connection": 4, 00:06:30.973 "pdu_pool_size": 36864, 00:06:30.973 "immediate_data_pool_size": 16384, 00:06:30.973 "data_out_pool_size": 2048 00:06:30.973 } 00:06:30.973 } 00:06:30.973 ] 00:06:30.973 } 00:06:30.973 ] 00:06:30.973 } 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 991764 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 991764 ']' 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 991764 00:06:30.973 19:04:41 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991764 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991764' 00:06:30.973 killing process with pid 991764 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 991764 00:06:30.973 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 991764 00:06:31.550 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=991904 00:06:31.550 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:31.550 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 991904 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 991904 ']' 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 991904 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991904 00:06:36.813 19:04:46 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991904' 00:06:36.813 killing process with pid 991904 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 991904 00:06:36.813 19:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 991904 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:36.813 00:06:36.813 real 0m6.515s 00:06:36.813 user 0m6.175s 00:06:36.813 sys 0m0.666s 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.813 ************************************ 00:06:36.813 END TEST skip_rpc_with_json 00:06:36.813 ************************************ 00:06:36.813 19:04:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:36.813 19:04:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.813 19:04:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.813 19:04:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.813 ************************************ 00:06:36.813 START TEST skip_rpc_with_delay 00:06:36.813 ************************************ 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- 
rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:36.813 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:37.071 [2024-12-06 19:04:47.433029] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:37.071 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:37.071 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.071 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.071 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.071 00:06:37.071 real 0m0.075s 00:06:37.071 user 0m0.049s 00:06:37.071 sys 0m0.026s 00:06:37.071 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.071 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:37.071 ************************************ 00:06:37.071 END TEST skip_rpc_with_delay 00:06:37.071 ************************************ 00:06:37.071 19:04:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:37.071 19:04:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:37.071 19:04:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:37.071 19:04:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.071 19:04:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.071 19:04:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.071 ************************************ 00:06:37.071 START TEST exit_on_failed_rpc_init 00:06:37.071 ************************************ 00:06:37.071 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:37.071 19:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=992622 00:06:37.071 19:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.071 19:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 992622 
00:06:37.071 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 992622 ']' 00:06:37.071 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.072 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.072 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.072 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.072 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:37.072 [2024-12-06 19:04:47.559857] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:06:37.072 [2024-12-06 19:04:47.559963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992622 ] 00:06:37.072 [2024-12-06 19:04:47.624863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.331 [2024-12-06 19:04:47.686466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:37.589 
19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:37.589 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:37.589 [2024-12-06 19:04:48.009757] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:06:37.589 [2024-12-06 19:04:48.009847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992742 ] 00:06:37.589 [2024-12-06 19:04:48.074401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.589 [2024-12-06 19:04:48.133625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.589 [2024-12-06 19:04:48.133762] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:37.589 [2024-12-06 19:04:48.133782] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:37.589 [2024-12-06 19:04:48.133793] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 992622 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 992622 ']' 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 992622 00:06:37.847 19:04:48 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 992622 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 992622' 00:06:37.847 killing process with pid 992622 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 992622 00:06:37.847 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 992622 00:06:38.106 00:06:38.106 real 0m1.150s 00:06:38.106 user 0m1.282s 00:06:38.106 sys 0m0.410s 00:06:38.106 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.106 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:38.106 ************************************ 00:06:38.106 END TEST exit_on_failed_rpc_init 00:06:38.106 ************************************ 00:06:38.106 19:04:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:38.106 00:06:38.106 real 0m13.532s 00:06:38.106 user 0m12.833s 00:06:38.106 sys 0m1.599s 00:06:38.106 19:04:48 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.106 19:04:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.106 ************************************ 00:06:38.106 END TEST skip_rpc 00:06:38.106 ************************************ 00:06:38.389 19:04:48 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:38.389 19:04:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.389 19:04:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.389 19:04:48 -- common/autotest_common.sh@10 -- # set +x 00:06:38.389 ************************************ 00:06:38.389 START TEST rpc_client 00:06:38.389 ************************************ 00:06:38.389 19:04:48 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:38.389 * Looking for test storage... 00:06:38.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:38.389 19:04:48 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.389 19:04:48 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.389 19:04:48 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.389 19:04:48 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.389 19:04:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:38.390 19:04:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.390 19:04:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:38.390 19:04:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:38.390 19:04:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.390 19:04:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:38.390 19:04:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.390 19:04:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.390 19:04:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.390 19:04:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:38.390 19:04:48 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.390 19:04:48 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.390 --rc genhtml_branch_coverage=1 00:06:38.390 --rc genhtml_function_coverage=1 00:06:38.390 --rc genhtml_legend=1 00:06:38.390 --rc geninfo_all_blocks=1 00:06:38.390 --rc geninfo_unexecuted_blocks=1 00:06:38.390 00:06:38.390 ' 00:06:38.390 19:04:48 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.390 --rc genhtml_branch_coverage=1 
00:06:38.390 --rc genhtml_function_coverage=1 00:06:38.390 --rc genhtml_legend=1 00:06:38.390 --rc geninfo_all_blocks=1 00:06:38.390 --rc geninfo_unexecuted_blocks=1 00:06:38.390 00:06:38.390 ' 00:06:38.390 19:04:48 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.390 --rc genhtml_branch_coverage=1 00:06:38.390 --rc genhtml_function_coverage=1 00:06:38.390 --rc genhtml_legend=1 00:06:38.390 --rc geninfo_all_blocks=1 00:06:38.390 --rc geninfo_unexecuted_blocks=1 00:06:38.390 00:06:38.390 ' 00:06:38.390 19:04:48 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.390 --rc genhtml_branch_coverage=1 00:06:38.390 --rc genhtml_function_coverage=1 00:06:38.390 --rc genhtml_legend=1 00:06:38.390 --rc geninfo_all_blocks=1 00:06:38.390 --rc geninfo_unexecuted_blocks=1 00:06:38.390 00:06:38.390 ' 00:06:38.390 19:04:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:38.390 OK 00:06:38.390 19:04:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:38.390 00:06:38.390 real 0m0.164s 00:06:38.390 user 0m0.099s 00:06:38.390 sys 0m0.075s 00:06:38.390 19:04:48 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.390 19:04:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:38.390 ************************************ 00:06:38.390 END TEST rpc_client 00:06:38.390 ************************************ 00:06:38.390 19:04:48 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:38.390 19:04:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.390 19:04:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.390 19:04:48 -- common/autotest_common.sh@10 
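The `START TEST` / `END TEST` banners and the `real/user/sys` summary above come from a `run_test`-style wrapper in autotest_common.sh. A hedged reconstruction of just the banner-and-status flow (the real helper also toggles xtrace and validates its argument count, which this sketch omits):

```shell
# Print banners around a named test command, time it, and propagate its
# exit status -- the pattern visible in the rpc_client trace above.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```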
-- # set +x 00:06:38.390 ************************************ 00:06:38.390 START TEST json_config 00:06:38.390 ************************************ 00:06:38.390 19:04:48 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:38.648 19:04:48 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.648 19:04:48 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.648 19:04:48 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.648 19:04:49 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.648 19:04:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.648 19:04:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.648 19:04:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.648 19:04:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.648 19:04:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.648 19:04:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.648 19:04:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.648 19:04:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.648 19:04:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.648 19:04:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.648 19:04:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.648 19:04:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:38.648 19:04:49 json_config -- scripts/common.sh@345 -- # : 1 00:06:38.648 19:04:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.648 19:04:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.648 19:04:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:38.648 19:04:49 json_config -- scripts/common.sh@353 -- # local d=1 00:06:38.648 19:04:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.648 19:04:49 json_config -- scripts/common.sh@355 -- # echo 1 00:06:38.648 19:04:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.648 19:04:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:38.648 19:04:49 json_config -- scripts/common.sh@353 -- # local d=2 00:06:38.648 19:04:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.648 19:04:49 json_config -- scripts/common.sh@355 -- # echo 2 00:06:38.648 19:04:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.648 19:04:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.648 19:04:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.648 19:04:49 json_config -- scripts/common.sh@368 -- # return 0 00:06:38.648 19:04:49 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.648 19:04:49 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.648 --rc genhtml_branch_coverage=1 00:06:38.648 --rc genhtml_function_coverage=1 00:06:38.648 --rc genhtml_legend=1 00:06:38.648 --rc geninfo_all_blocks=1 00:06:38.648 --rc geninfo_unexecuted_blocks=1 00:06:38.648 00:06:38.648 ' 00:06:38.648 19:04:49 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.648 --rc genhtml_branch_coverage=1 00:06:38.648 --rc genhtml_function_coverage=1 00:06:38.648 --rc genhtml_legend=1 00:06:38.648 --rc geninfo_all_blocks=1 00:06:38.648 --rc geninfo_unexecuted_blocks=1 00:06:38.648 00:06:38.648 ' 00:06:38.648 19:04:49 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.648 --rc genhtml_branch_coverage=1 00:06:38.648 --rc genhtml_function_coverage=1 00:06:38.648 --rc genhtml_legend=1 00:06:38.648 --rc geninfo_all_blocks=1 00:06:38.648 --rc geninfo_unexecuted_blocks=1 00:06:38.648 00:06:38.648 ' 00:06:38.648 19:04:49 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.648 --rc genhtml_branch_coverage=1 00:06:38.648 --rc genhtml_function_coverage=1 00:06:38.648 --rc genhtml_legend=1 00:06:38.648 --rc geninfo_all_blocks=1 00:06:38.648 --rc geninfo_unexecuted_blocks=1 00:06:38.648 00:06:38.648 ' 00:06:38.648 19:04:49 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.648 19:04:49 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.648 19:04:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.648 19:04:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.648 19:04:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.648 19:04:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.649 19:04:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.649 19:04:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.649 19:04:49 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.649 19:04:49 json_config -- paths/export.sh@5 -- # export PATH 00:06:38.649 19:04:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@51 -- # : 0 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.649 19:04:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:38.649 INFO: JSON configuration test init 00:06:38.649 19:04:49 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.649 19:04:49 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:38.649 19:04:49 json_config -- json_config/common.sh@9 -- # local app=target 00:06:38.649 19:04:49 json_config -- json_config/common.sh@10 -- # shift 00:06:38.649 19:04:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:38.649 19:04:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:38.649 19:04:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:38.649 19:04:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.649 19:04:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.649 19:04:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=993008 00:06:38.649 19:04:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:38.649 19:04:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:38.649 Waiting for target to run... 
00:06:38.649 19:04:49 json_config -- json_config/common.sh@25 -- # waitforlisten 993008 /var/tmp/spdk_tgt.sock 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 993008 ']' 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:38.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.649 19:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.649 [2024-12-06 19:04:49.171176] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:06:38.649 [2024-12-06 19:04:49.171263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993008 ] 00:06:39.214 [2024-12-06 19:04:49.700508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.214 [2024-12-06 19:04:49.752444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.778 19:04:50 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.778 19:04:50 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:39.778 19:04:50 json_config -- json_config/common.sh@26 -- # echo '' 00:06:39.778 00:06:39.778 19:04:50 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:39.778 19:04:50 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:39.778 19:04:50 json_config -- common/autotest_common.sh@726 
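`waitforlisten 993008 /var/tmp/spdk_tgt.sock` above blocks until spdk_tgt is both alive and listening on its RPC socket. A simplified sketch of that poll loop; the real helper issues an RPC over the socket rather than just checking that the socket file exists, and its retry count and delay here are illustrative:

```shell
# Poll until $pid is up and its UNIX socket $sock exists, or give up
# after ~10 s. Simplified stand-in for the traced waitforlisten helper.
start_and_wait() {
    local sock=$1 pid=$2 i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
        [[ -S $sock ]] && return 0              # socket present: target is up
        sleep 0.1
    done
    return 1                                    # timed out
}
```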
-- # xtrace_disable 00:06:39.778 19:04:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.778 19:04:50 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:39.778 19:04:50 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:39.778 19:04:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.778 19:04:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.778 19:04:50 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:39.778 19:04:50 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:39.778 19:04:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:43.083 19:04:53 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:43.083 19:04:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:43.083 19:04:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.083 19:04:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.083 19:04:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:43.083 19:04:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:43.083 19:04:53 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:43.083 19:04:53 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:43.083 19:04:53 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:43.083 19:04:53 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:43.083 19:04:53 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:43.083 19:04:53 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@54 -- # sort 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:43.341 19:04:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.341 19:04:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@237 -- # timing_enter 
create_nvmf_subsystem_config 00:06:43.341 19:04:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.341 19:04:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:43.341 19:04:53 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:43.341 19:04:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:43.599 MallocForNvmf0 00:06:43.599 19:04:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:43.599 19:04:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:43.856 MallocForNvmf1 00:06:43.856 19:04:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:43.856 19:04:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:44.113 [2024-12-06 19:04:54.554610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.113 19:04:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:44.113 19:04:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:44.371 19:04:54 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:44.371 19:04:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:44.629 19:04:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:44.629 19:04:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:44.886 19:04:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:44.886 19:04:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:45.143 [2024-12-06 19:04:55.666199] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:45.143 19:04:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:45.143 19:04:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.143 19:04:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.143 19:04:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:45.143 19:04:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.143 19:04:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.401 19:04:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:45.401 19:04:55 
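Collected from the trace, the RPC sequence that builds the NVMe-oF target is short. A recap as one function, with the NQN, serial, socket path, and rpc.py invocation taken verbatim from the log; this is a reference fragment that needs a running spdk_tgt in an SPDK checkout, so it is defined but not executed here:

```shell
# Reference fragment only: replays the tgt_rpc calls from the json_config
# trace above against a running spdk_tgt. Requires an SPDK checkout.
configure_nvmf_target() {
    local rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0    # size MiB, block B
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
}
```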
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:45.401 19:04:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:45.659 MallocBdevForConfigChangeCheck 00:06:45.659 19:04:55 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:45.659 19:04:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.659 19:04:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.659 19:04:56 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:45.659 19:04:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:45.916 19:04:56 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:45.916 INFO: shutting down applications... 
00:06:45.916 19:04:56 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:45.916 19:04:56 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:45.916 19:04:56 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:45.916 19:04:56 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:47.814 Calling clear_iscsi_subsystem 00:06:47.814 Calling clear_nvmf_subsystem 00:06:47.814 Calling clear_nbd_subsystem 00:06:47.814 Calling clear_ublk_subsystem 00:06:47.814 Calling clear_vhost_blk_subsystem 00:06:47.814 Calling clear_vhost_scsi_subsystem 00:06:47.814 Calling clear_bdev_subsystem 00:06:47.814 19:04:58 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:47.814 19:04:58 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:47.814 19:04:58 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:47.814 19:04:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:47.814 19:04:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:47.814 19:04:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:48.074 19:04:58 json_config -- json_config/json_config.sh@352 -- # break 00:06:48.074 19:04:58 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:48.074 19:04:58 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:48.074 19:04:58 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:48.074 19:04:58 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:48.074 19:04:58 json_config -- json_config/common.sh@35 -- # [[ -n 993008 ]] 00:06:48.074 19:04:58 json_config -- json_config/common.sh@38 -- # kill -SIGINT 993008 00:06:48.074 19:04:58 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:48.074 19:04:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.074 19:04:58 json_config -- json_config/common.sh@41 -- # kill -0 993008 00:06:48.074 19:04:58 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:48.642 19:04:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:48.642 19:04:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.642 19:04:58 json_config -- json_config/common.sh@41 -- # kill -0 993008 00:06:48.643 19:04:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:48.643 19:04:58 json_config -- json_config/common.sh@43 -- # break 00:06:48.643 19:04:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:48.643 19:04:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:48.643 SPDK target shutdown done 00:06:48.643 19:04:58 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:48.643 INFO: relaunching applications... 
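`json_config_test_shutdown_app` above sends SIGINT and then polls `kill -0` up to 30 times at 0.5 s intervals before declaring "SPDK target shutdown done". A self-contained sketch of that loop; the signal is parameterized for reuse, and the SIGKILL fallback is this sketch's assumption, not necessarily what the harness does on timeout:

```shell
# Send a signal, then poll until the process is gone -- the shutdown
# loop visible in json_config/common.sh above (30 tries, 0.5 s apart).
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT} i
    kill -"$sig" "$pid" 2>/dev/null
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0  # gone: clean shutdown
        sleep 0.5
    done
    kill -9 "$pid" 2>/dev/null                  # assumed last resort
    return 1
}
```

Note that `kill -0` sends no signal at all; it only checks whether the pid still exists, which is exactly how the traced loop detects exit.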
00:06:48.643 19:04:58 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.643 19:04:58 json_config -- json_config/common.sh@9 -- # local app=target 00:06:48.643 19:04:58 json_config -- json_config/common.sh@10 -- # shift 00:06:48.643 19:04:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:48.643 19:04:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:48.643 19:04:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:48.643 19:04:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.643 19:04:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.643 19:04:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=994210 00:06:48.643 19:04:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.643 19:04:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:48.643 Waiting for target to run... 00:06:48.643 19:04:58 json_config -- json_config/common.sh@25 -- # waitforlisten 994210 /var/tmp/spdk_tgt.sock 00:06:48.643 19:04:58 json_config -- common/autotest_common.sh@835 -- # '[' -z 994210 ']' 00:06:48.643 19:04:58 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:48.643 19:04:58 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.643 19:04:58 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:48.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:48.643 19:04:58 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.643 19:04:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.643 [2024-12-06 19:04:59.039574] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:06:48.643 [2024-12-06 19:04:59.039679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994210 ] 00:06:48.907 [2024-12-06 19:04:59.381504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.907 [2024-12-06 19:04:59.423888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.249 [2024-12-06 19:05:02.472336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.249 [2024-12-06 19:05:02.504803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:52.249 19:05:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.249 19:05:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:52.249 19:05:02 json_config -- json_config/common.sh@26 -- # echo '' 00:06:52.249 00:06:52.249 19:05:02 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:52.249 19:05:02 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:52.249 INFO: Checking if target configuration is the same... 
00:06:52.249 19:05:02 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.249 19:05:02 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:52.249 19:05:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:52.249 + '[' 2 -ne 2 ']' 00:06:52.250 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:52.250 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:52.250 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:52.250 +++ basename /dev/fd/62 00:06:52.250 ++ mktemp /tmp/62.XXX 00:06:52.250 + tmp_file_1=/tmp/62.Zz8 00:06:52.250 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.250 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:52.250 + tmp_file_2=/tmp/spdk_tgt_config.json.uD1 00:06:52.250 + ret=0 00:06:52.250 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:52.509 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:52.509 + diff -u /tmp/62.Zz8 /tmp/spdk_tgt_config.json.uD1 00:06:52.509 + echo 'INFO: JSON config files are the same' 00:06:52.509 INFO: JSON config files are the same 00:06:52.509 + rm /tmp/62.Zz8 /tmp/spdk_tgt_config.json.uD1 00:06:52.509 + exit 0 00:06:52.509 19:05:02 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:52.510 19:05:02 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:52.510 INFO: changing configuration and checking if this can be detected... 
00:06:52.510 19:05:02 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:52.510 19:05:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:52.769 19:05:03 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.769 19:05:03 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:52.769 19:05:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:52.769 + '[' 2 -ne 2 ']' 00:06:52.769 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:52.769 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:52.769 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:52.769 +++ basename /dev/fd/62 00:06:52.769 ++ mktemp /tmp/62.XXX 00:06:52.769 + tmp_file_1=/tmp/62.Wrv 00:06:52.769 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.769 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:52.769 + tmp_file_2=/tmp/spdk_tgt_config.json.D8W 00:06:52.769 + ret=0 00:06:52.769 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:53.338 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:53.338 + diff -u /tmp/62.Wrv /tmp/spdk_tgt_config.json.D8W 00:06:53.338 + ret=1 00:06:53.338 + echo '=== Start of file: /tmp/62.Wrv ===' 00:06:53.338 + cat /tmp/62.Wrv 00:06:53.338 + echo '=== End of file: /tmp/62.Wrv ===' 00:06:53.338 + echo '' 00:06:53.338 + echo '=== Start of file: /tmp/spdk_tgt_config.json.D8W ===' 00:06:53.338 + cat /tmp/spdk_tgt_config.json.D8W 00:06:53.338 + echo '=== End of file: /tmp/spdk_tgt_config.json.D8W ===' 00:06:53.338 + echo '' 00:06:53.338 + rm /tmp/62.Wrv /tmp/spdk_tgt_config.json.D8W 00:06:53.338 + exit 1 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:53.338 INFO: configuration change detected. 
00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@324 -- # [[ -n 994210 ]] 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.338 19:05:03 json_config -- json_config/json_config.sh@330 -- # killprocess 994210 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@954 -- # '[' -z 994210 ']' 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@958 -- # kill -0 994210 
00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@959 -- # uname 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 994210 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 994210' 00:06:53.338 killing process with pid 994210 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@973 -- # kill 994210 00:06:53.338 19:05:03 json_config -- common/autotest_common.sh@978 -- # wait 994210 00:06:55.302 19:05:05 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:55.302 19:05:05 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:55.302 19:05:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.302 19:05:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.302 19:05:05 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:55.302 19:05:05 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:55.302 INFO: Success 00:06:55.302 00:06:55.302 real 0m16.479s 00:06:55.302 user 0m18.221s 00:06:55.302 sys 0m2.625s 00:06:55.302 19:05:05 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.302 19:05:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.302 ************************************ 00:06:55.302 END TEST json_config 00:06:55.302 ************************************ 00:06:55.302 19:05:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:55.302 19:05:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.302 19:05:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.302 19:05:05 -- common/autotest_common.sh@10 -- # set +x 00:06:55.302 ************************************ 00:06:55.302 START TEST json_config_extra_key 00:06:55.302 ************************************ 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.302 --rc genhtml_branch_coverage=1 00:06:55.302 --rc genhtml_function_coverage=1 00:06:55.302 --rc genhtml_legend=1 00:06:55.302 --rc geninfo_all_blocks=1 
00:06:55.302 --rc geninfo_unexecuted_blocks=1 00:06:55.302 00:06:55.302 ' 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.302 --rc genhtml_branch_coverage=1 00:06:55.302 --rc genhtml_function_coverage=1 00:06:55.302 --rc genhtml_legend=1 00:06:55.302 --rc geninfo_all_blocks=1 00:06:55.302 --rc geninfo_unexecuted_blocks=1 00:06:55.302 00:06:55.302 ' 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.302 --rc genhtml_branch_coverage=1 00:06:55.302 --rc genhtml_function_coverage=1 00:06:55.302 --rc genhtml_legend=1 00:06:55.302 --rc geninfo_all_blocks=1 00:06:55.302 --rc geninfo_unexecuted_blocks=1 00:06:55.302 00:06:55.302 ' 00:06:55.302 19:05:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.302 --rc genhtml_branch_coverage=1 00:06:55.302 --rc genhtml_function_coverage=1 00:06:55.302 --rc genhtml_legend=1 00:06:55.302 --rc geninfo_all_blocks=1 00:06:55.302 --rc geninfo_unexecuted_blocks=1 00:06:55.302 00:06:55.302 ' 00:06:55.302 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.302 19:05:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.302 19:05:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.303 19:05:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.303 19:05:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.303 19:05:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.303 19:05:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:55.303 19:05:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.303 19:05:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:55.303 INFO: launching applications... 00:06:55.303 19:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=995134 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:55.303 Waiting for target to run...
00:06:55.303 19:05:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 995134 /var/tmp/spdk_tgt.sock 00:06:55.303 19:05:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 995134 ']' 00:06:55.303 19:05:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:55.303 19:05:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.303 19:05:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:55.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:55.303 19:05:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.303 19:05:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:55.303 [2024-12-06 19:05:05.669767] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:06:55.303 [2024-12-06 19:05:05.669862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995134 ] 00:06:55.562 [2024-12-06 19:05:06.017454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.562 [2024-12-06 19:05:06.059265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.129 19:05:06 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.129 19:05:06 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:56.129 19:05:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:56.129 00:06:56.129 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:56.129 INFO: shutting down applications... 00:06:56.129 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:56.129 19:05:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:56.129 19:05:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:56.129 19:05:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 995134 ]] 00:06:56.129 19:05:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 995134 00:06:56.129 19:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:56.129 19:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:56.129 19:05:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 995134 00:06:56.129 19:05:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:56.727 19:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:56.727 19:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:56.727 19:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 995134 00:06:56.727 19:05:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:56.727 19:05:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:56.727 19:05:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:56.727 19:05:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:56.727 SPDK target shutdown done 00:06:56.727 19:05:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:56.727 Success 00:06:56.727 00:06:56.727 real 0m1.682s 00:06:56.727 user 0m1.680s 00:06:56.727 sys 0m0.454s 00:06:56.727 19:05:07 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.727 19:05:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:06:56.727 ************************************ 00:06:56.727 END TEST json_config_extra_key 00:06:56.727 ************************************ 00:06:56.727 19:05:07 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:56.727 19:05:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.727 19:05:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.727 19:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:56.727 ************************************ 00:06:56.727 START TEST alias_rpc 00:06:56.727 ************************************ 00:06:56.727 19:05:07 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:56.727 * Looking for test storage... 00:06:56.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:56.727 19:05:07 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:56.727 19:05:07 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:56.727 19:05:07 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.987 19:05:07 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.987 19:05:07 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:56.987 19:05:07 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.987 19:05:07 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.987 --rc genhtml_branch_coverage=1 00:06:56.987 --rc genhtml_function_coverage=1 00:06:56.987 --rc genhtml_legend=1 00:06:56.987 --rc geninfo_all_blocks=1 00:06:56.987 --rc geninfo_unexecuted_blocks=1 00:06:56.987 00:06:56.987 ' 
00:06:56.987 19:05:07 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.987 --rc genhtml_branch_coverage=1 00:06:56.987 --rc genhtml_function_coverage=1 00:06:56.987 --rc genhtml_legend=1 00:06:56.987 --rc geninfo_all_blocks=1 00:06:56.987 --rc geninfo_unexecuted_blocks=1 00:06:56.987 00:06:56.987 ' 00:06:56.987 19:05:07 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.987 --rc genhtml_branch_coverage=1 00:06:56.987 --rc genhtml_function_coverage=1 00:06:56.987 --rc genhtml_legend=1 00:06:56.987 --rc geninfo_all_blocks=1 00:06:56.987 --rc geninfo_unexecuted_blocks=1 00:06:56.987 00:06:56.987 ' 00:06:56.987 19:05:07 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.987 --rc genhtml_branch_coverage=1 00:06:56.987 --rc genhtml_function_coverage=1 00:06:56.987 --rc genhtml_legend=1 00:06:56.987 --rc geninfo_all_blocks=1 00:06:56.987 --rc geninfo_unexecuted_blocks=1 00:06:56.987 00:06:56.987 ' 00:06:56.987 19:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.987 19:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=995452 00:06:56.988 19:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:56.988 19:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 995452 00:06:56.988 19:05:07 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 995452 ']' 00:06:56.988 19:05:07 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.988 19:05:07 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.988 19:05:07 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:56.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.988 19:05:07 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.988 19:05:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.988 [2024-12-06 19:05:07.413724] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:06:56.988 [2024-12-06 19:05:07.413815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995452 ] 00:06:56.988 [2024-12-06 19:05:07.478092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.988 [2024-12-06 19:05:07.536588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.247 19:05:07 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.247 19:05:07 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:57.247 19:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:57.815 19:05:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 995452 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 995452 ']' 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 995452 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 995452 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 995452'
killing process with pid 995452 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@973 -- # kill 995452 00:06:57.815 19:05:08 alias_rpc -- common/autotest_common.sh@978 -- # wait 995452 00:06:58.094 00:06:58.094 real 0m1.356s 00:06:58.094 user 0m1.463s 00:06:58.094 sys 0m0.456s 00:06:58.094 19:05:08 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.094 19:05:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.094 ************************************ 00:06:58.094 END TEST alias_rpc 00:06:58.094 ************************************ 00:06:58.094 19:05:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:58.094 19:05:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:58.094 19:05:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.094 19:05:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.094 19:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:58.094 ************************************ 00:06:58.094 START TEST spdkcli_tcp 00:06:58.094 ************************************ 00:06:58.094 19:05:08 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:58.094 * Looking for test storage...
00:06:58.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:58.094 19:05:08 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:58.094 19:05:08 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:58.094 19:05:08 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.353 19:05:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:58.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.353 --rc genhtml_branch_coverage=1 00:06:58.353 --rc genhtml_function_coverage=1 00:06:58.353 --rc genhtml_legend=1 00:06:58.353 --rc geninfo_all_blocks=1 00:06:58.353 --rc geninfo_unexecuted_blocks=1 00:06:58.353 00:06:58.353 ' 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:58.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.353 --rc genhtml_branch_coverage=1 00:06:58.353 --rc genhtml_function_coverage=1 00:06:58.353 --rc genhtml_legend=1 00:06:58.353 --rc geninfo_all_blocks=1 00:06:58.353 --rc geninfo_unexecuted_blocks=1 00:06:58.353 00:06:58.353 ' 00:06:58.353 19:05:08 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:58.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.353 --rc genhtml_branch_coverage=1 00:06:58.353 --rc genhtml_function_coverage=1 00:06:58.353 --rc genhtml_legend=1 00:06:58.353 --rc geninfo_all_blocks=1 00:06:58.353 --rc geninfo_unexecuted_blocks=1 00:06:58.353 00:06:58.353 ' 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:58.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.353 --rc genhtml_branch_coverage=1 00:06:58.353 --rc genhtml_function_coverage=1 00:06:58.353 --rc genhtml_legend=1 00:06:58.353 --rc geninfo_all_blocks=1 00:06:58.353 --rc geninfo_unexecuted_blocks=1 00:06:58.353 00:06:58.353 ' 00:06:58.353 19:05:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:58.353 19:05:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:58.353 19:05:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:58.353 19:05:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:58.353 19:05:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:58.353 19:05:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:58.353 19:05:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.353 19:05:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=995646 00:06:58.353 19:05:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:58.353 19:05:08 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 995646 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 995646 ']' 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.353 19:05:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.353 [2024-12-06 19:05:08.818434] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:06:58.353 [2024-12-06 19:05:08.818520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995646 ] 00:06:58.353 [2024-12-06 19:05:08.882781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.611 [2024-12-06 19:05:08.941852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.611 [2024-12-06 19:05:08.941857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.868 19:05:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.868 19:05:09 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:58.868 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=995733 00:06:58.868 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:58.868 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:06:59.128 [ 00:06:59.128 "bdev_malloc_delete", 00:06:59.128 "bdev_malloc_create", 00:06:59.128 "bdev_null_resize", 00:06:59.128 "bdev_null_delete", 00:06:59.128 "bdev_null_create", 00:06:59.128 "bdev_nvme_cuse_unregister", 00:06:59.128 "bdev_nvme_cuse_register", 00:06:59.128 "bdev_opal_new_user", 00:06:59.128 "bdev_opal_set_lock_state", 00:06:59.128 "bdev_opal_delete", 00:06:59.128 "bdev_opal_get_info", 00:06:59.128 "bdev_opal_create", 00:06:59.128 "bdev_nvme_opal_revert", 00:06:59.128 "bdev_nvme_opal_init", 00:06:59.128 "bdev_nvme_send_cmd", 00:06:59.128 "bdev_nvme_set_keys", 00:06:59.128 "bdev_nvme_get_path_iostat", 00:06:59.128 "bdev_nvme_get_mdns_discovery_info", 00:06:59.128 "bdev_nvme_stop_mdns_discovery", 00:06:59.128 "bdev_nvme_start_mdns_discovery", 00:06:59.128 "bdev_nvme_set_multipath_policy", 00:06:59.128 "bdev_nvme_set_preferred_path", 00:06:59.128 "bdev_nvme_get_io_paths", 00:06:59.128 "bdev_nvme_remove_error_injection", 00:06:59.128 "bdev_nvme_add_error_injection", 00:06:59.128 "bdev_nvme_get_discovery_info", 00:06:59.128 "bdev_nvme_stop_discovery", 00:06:59.128 "bdev_nvme_start_discovery", 00:06:59.128 "bdev_nvme_get_controller_health_info", 00:06:59.128 "bdev_nvme_disable_controller", 00:06:59.128 "bdev_nvme_enable_controller", 00:06:59.128 "bdev_nvme_reset_controller", 00:06:59.128 "bdev_nvme_get_transport_statistics", 00:06:59.128 "bdev_nvme_apply_firmware", 00:06:59.128 "bdev_nvme_detach_controller", 00:06:59.128 "bdev_nvme_get_controllers", 00:06:59.128 "bdev_nvme_attach_controller", 00:06:59.128 "bdev_nvme_set_hotplug", 00:06:59.128 "bdev_nvme_set_options", 00:06:59.128 "bdev_passthru_delete", 00:06:59.128 "bdev_passthru_create", 00:06:59.128 "bdev_lvol_set_parent_bdev", 00:06:59.128 "bdev_lvol_set_parent", 00:06:59.128 "bdev_lvol_check_shallow_copy", 00:06:59.128 "bdev_lvol_start_shallow_copy", 00:06:59.128 "bdev_lvol_grow_lvstore", 00:06:59.128 "bdev_lvol_get_lvols", 00:06:59.128 "bdev_lvol_get_lvstores", 
00:06:59.128 "bdev_lvol_delete", 00:06:59.128 "bdev_lvol_set_read_only", 00:06:59.128 "bdev_lvol_resize", 00:06:59.128 "bdev_lvol_decouple_parent", 00:06:59.128 "bdev_lvol_inflate", 00:06:59.128 "bdev_lvol_rename", 00:06:59.128 "bdev_lvol_clone_bdev", 00:06:59.128 "bdev_lvol_clone", 00:06:59.128 "bdev_lvol_snapshot", 00:06:59.128 "bdev_lvol_create", 00:06:59.128 "bdev_lvol_delete_lvstore", 00:06:59.128 "bdev_lvol_rename_lvstore", 00:06:59.128 "bdev_lvol_create_lvstore", 00:06:59.128 "bdev_raid_set_options", 00:06:59.128 "bdev_raid_remove_base_bdev", 00:06:59.128 "bdev_raid_add_base_bdev", 00:06:59.128 "bdev_raid_delete", 00:06:59.129 "bdev_raid_create", 00:06:59.129 "bdev_raid_get_bdevs", 00:06:59.129 "bdev_error_inject_error", 00:06:59.129 "bdev_error_delete", 00:06:59.129 "bdev_error_create", 00:06:59.129 "bdev_split_delete", 00:06:59.129 "bdev_split_create", 00:06:59.129 "bdev_delay_delete", 00:06:59.129 "bdev_delay_create", 00:06:59.129 "bdev_delay_update_latency", 00:06:59.129 "bdev_zone_block_delete", 00:06:59.129 "bdev_zone_block_create", 00:06:59.129 "blobfs_create", 00:06:59.129 "blobfs_detect", 00:06:59.129 "blobfs_set_cache_size", 00:06:59.129 "bdev_aio_delete", 00:06:59.129 "bdev_aio_rescan", 00:06:59.129 "bdev_aio_create", 00:06:59.129 "bdev_ftl_set_property", 00:06:59.129 "bdev_ftl_get_properties", 00:06:59.129 "bdev_ftl_get_stats", 00:06:59.129 "bdev_ftl_unmap", 00:06:59.129 "bdev_ftl_unload", 00:06:59.129 "bdev_ftl_delete", 00:06:59.129 "bdev_ftl_load", 00:06:59.129 "bdev_ftl_create", 00:06:59.129 "bdev_virtio_attach_controller", 00:06:59.129 "bdev_virtio_scsi_get_devices", 00:06:59.129 "bdev_virtio_detach_controller", 00:06:59.129 "bdev_virtio_blk_set_hotplug", 00:06:59.129 "bdev_iscsi_delete", 00:06:59.129 "bdev_iscsi_create", 00:06:59.129 "bdev_iscsi_set_options", 00:06:59.129 "accel_error_inject_error", 00:06:59.129 "ioat_scan_accel_module", 00:06:59.129 "dsa_scan_accel_module", 00:06:59.129 "iaa_scan_accel_module", 00:06:59.129 
"vfu_virtio_create_fs_endpoint", 00:06:59.129 "vfu_virtio_create_scsi_endpoint", 00:06:59.129 "vfu_virtio_scsi_remove_target", 00:06:59.129 "vfu_virtio_scsi_add_target", 00:06:59.129 "vfu_virtio_create_blk_endpoint", 00:06:59.129 "vfu_virtio_delete_endpoint", 00:06:59.129 "keyring_file_remove_key", 00:06:59.129 "keyring_file_add_key", 00:06:59.129 "keyring_linux_set_options", 00:06:59.129 "fsdev_aio_delete", 00:06:59.129 "fsdev_aio_create", 00:06:59.129 "iscsi_get_histogram", 00:06:59.129 "iscsi_enable_histogram", 00:06:59.129 "iscsi_set_options", 00:06:59.129 "iscsi_get_auth_groups", 00:06:59.129 "iscsi_auth_group_remove_secret", 00:06:59.129 "iscsi_auth_group_add_secret", 00:06:59.129 "iscsi_delete_auth_group", 00:06:59.129 "iscsi_create_auth_group", 00:06:59.129 "iscsi_set_discovery_auth", 00:06:59.129 "iscsi_get_options", 00:06:59.129 "iscsi_target_node_request_logout", 00:06:59.129 "iscsi_target_node_set_redirect", 00:06:59.129 "iscsi_target_node_set_auth", 00:06:59.129 "iscsi_target_node_add_lun", 00:06:59.129 "iscsi_get_stats", 00:06:59.129 "iscsi_get_connections", 00:06:59.129 "iscsi_portal_group_set_auth", 00:06:59.129 "iscsi_start_portal_group", 00:06:59.129 "iscsi_delete_portal_group", 00:06:59.129 "iscsi_create_portal_group", 00:06:59.129 "iscsi_get_portal_groups", 00:06:59.129 "iscsi_delete_target_node", 00:06:59.129 "iscsi_target_node_remove_pg_ig_maps", 00:06:59.129 "iscsi_target_node_add_pg_ig_maps", 00:06:59.129 "iscsi_create_target_node", 00:06:59.129 "iscsi_get_target_nodes", 00:06:59.129 "iscsi_delete_initiator_group", 00:06:59.129 "iscsi_initiator_group_remove_initiators", 00:06:59.129 "iscsi_initiator_group_add_initiators", 00:06:59.129 "iscsi_create_initiator_group", 00:06:59.129 "iscsi_get_initiator_groups", 00:06:59.129 "nvmf_set_crdt", 00:06:59.129 "nvmf_set_config", 00:06:59.129 "nvmf_set_max_subsystems", 00:06:59.129 "nvmf_stop_mdns_prr", 00:06:59.129 "nvmf_publish_mdns_prr", 00:06:59.129 "nvmf_subsystem_get_listeners", 00:06:59.129 
"nvmf_subsystem_get_qpairs", 00:06:59.129 "nvmf_subsystem_get_controllers", 00:06:59.129 "nvmf_get_stats", 00:06:59.129 "nvmf_get_transports", 00:06:59.129 "nvmf_create_transport", 00:06:59.129 "nvmf_get_targets", 00:06:59.129 "nvmf_delete_target", 00:06:59.129 "nvmf_create_target", 00:06:59.129 "nvmf_subsystem_allow_any_host", 00:06:59.129 "nvmf_subsystem_set_keys", 00:06:59.129 "nvmf_subsystem_remove_host", 00:06:59.129 "nvmf_subsystem_add_host", 00:06:59.129 "nvmf_ns_remove_host", 00:06:59.129 "nvmf_ns_add_host", 00:06:59.129 "nvmf_subsystem_remove_ns", 00:06:59.129 "nvmf_subsystem_set_ns_ana_group", 00:06:59.129 "nvmf_subsystem_add_ns", 00:06:59.129 "nvmf_subsystem_listener_set_ana_state", 00:06:59.129 "nvmf_discovery_get_referrals", 00:06:59.129 "nvmf_discovery_remove_referral", 00:06:59.129 "nvmf_discovery_add_referral", 00:06:59.129 "nvmf_subsystem_remove_listener", 00:06:59.129 "nvmf_subsystem_add_listener", 00:06:59.129 "nvmf_delete_subsystem", 00:06:59.129 "nvmf_create_subsystem", 00:06:59.129 "nvmf_get_subsystems", 00:06:59.129 "env_dpdk_get_mem_stats", 00:06:59.129 "nbd_get_disks", 00:06:59.129 "nbd_stop_disk", 00:06:59.129 "nbd_start_disk", 00:06:59.129 "ublk_recover_disk", 00:06:59.129 "ublk_get_disks", 00:06:59.129 "ublk_stop_disk", 00:06:59.129 "ublk_start_disk", 00:06:59.129 "ublk_destroy_target", 00:06:59.129 "ublk_create_target", 00:06:59.129 "virtio_blk_create_transport", 00:06:59.129 "virtio_blk_get_transports", 00:06:59.129 "vhost_controller_set_coalescing", 00:06:59.129 "vhost_get_controllers", 00:06:59.129 "vhost_delete_controller", 00:06:59.129 "vhost_create_blk_controller", 00:06:59.129 "vhost_scsi_controller_remove_target", 00:06:59.129 "vhost_scsi_controller_add_target", 00:06:59.129 "vhost_start_scsi_controller", 00:06:59.129 "vhost_create_scsi_controller", 00:06:59.129 "thread_set_cpumask", 00:06:59.129 "scheduler_set_options", 00:06:59.129 "framework_get_governor", 00:06:59.129 "framework_get_scheduler", 00:06:59.129 
"framework_set_scheduler", 00:06:59.129 "framework_get_reactors", 00:06:59.129 "thread_get_io_channels", 00:06:59.129 "thread_get_pollers", 00:06:59.129 "thread_get_stats", 00:06:59.129 "framework_monitor_context_switch", 00:06:59.129 "spdk_kill_instance", 00:06:59.129 "log_enable_timestamps", 00:06:59.129 "log_get_flags", 00:06:59.129 "log_clear_flag", 00:06:59.129 "log_set_flag", 00:06:59.129 "log_get_level", 00:06:59.129 "log_set_level", 00:06:59.129 "log_get_print_level", 00:06:59.129 "log_set_print_level", 00:06:59.129 "framework_enable_cpumask_locks", 00:06:59.129 "framework_disable_cpumask_locks", 00:06:59.129 "framework_wait_init", 00:06:59.129 "framework_start_init", 00:06:59.129 "scsi_get_devices", 00:06:59.129 "bdev_get_histogram", 00:06:59.129 "bdev_enable_histogram", 00:06:59.129 "bdev_set_qos_limit", 00:06:59.129 "bdev_set_qd_sampling_period", 00:06:59.129 "bdev_get_bdevs", 00:06:59.129 "bdev_reset_iostat", 00:06:59.129 "bdev_get_iostat", 00:06:59.129 "bdev_examine", 00:06:59.129 "bdev_wait_for_examine", 00:06:59.129 "bdev_set_options", 00:06:59.129 "accel_get_stats", 00:06:59.129 "accel_set_options", 00:06:59.129 "accel_set_driver", 00:06:59.129 "accel_crypto_key_destroy", 00:06:59.129 "accel_crypto_keys_get", 00:06:59.129 "accel_crypto_key_create", 00:06:59.129 "accel_assign_opc", 00:06:59.129 "accel_get_module_info", 00:06:59.129 "accel_get_opc_assignments", 00:06:59.129 "vmd_rescan", 00:06:59.129 "vmd_remove_device", 00:06:59.129 "vmd_enable", 00:06:59.129 "sock_get_default_impl", 00:06:59.129 "sock_set_default_impl", 00:06:59.129 "sock_impl_set_options", 00:06:59.129 "sock_impl_get_options", 00:06:59.129 "iobuf_get_stats", 00:06:59.129 "iobuf_set_options", 00:06:59.129 "keyring_get_keys", 00:06:59.129 "vfu_tgt_set_base_path", 00:06:59.129 "framework_get_pci_devices", 00:06:59.129 "framework_get_config", 00:06:59.129 "framework_get_subsystems", 00:06:59.129 "fsdev_set_opts", 00:06:59.129 "fsdev_get_opts", 00:06:59.129 "trace_get_info", 
00:06:59.129 "trace_get_tpoint_group_mask", 00:06:59.129 "trace_disable_tpoint_group", 00:06:59.129 "trace_enable_tpoint_group", 00:06:59.129 "trace_clear_tpoint_mask", 00:06:59.129 "trace_set_tpoint_mask", 00:06:59.129 "notify_get_notifications", 00:06:59.129 "notify_get_types", 00:06:59.129 "spdk_get_version", 00:06:59.129 "rpc_get_methods" 00:06:59.129 ] 00:06:59.129 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.129 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:59.129 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 995646 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 995646 ']' 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 995646 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 995646 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 995646' 00:06:59.129 killing process with pid 995646 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 995646 00:06:59.129 19:05:09 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 995646 00:06:59.697 00:06:59.697 real 0m1.354s 00:06:59.697 user 0m2.437s 00:06:59.697 sys 0m0.451s 00:06:59.698 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.698 19:05:09 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:06:59.698 ************************************ 00:06:59.698 END TEST spdkcli_tcp 00:06:59.698 ************************************ 00:06:59.698 19:05:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:59.698 19:05:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.698 19:05:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.698 19:05:09 -- common/autotest_common.sh@10 -- # set +x 00:06:59.698 ************************************ 00:06:59.698 START TEST dpdk_mem_utility 00:06:59.698 ************************************ 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:59.698 * Looking for test storage... 00:06:59.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.698 19:05:10 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.698 19:05:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.698 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.698 --rc genhtml_branch_coverage=1 00:06:59.698 --rc genhtml_function_coverage=1 00:06:59.698 --rc genhtml_legend=1 00:06:59.698 --rc geninfo_all_blocks=1 00:06:59.698 --rc geninfo_unexecuted_blocks=1 00:06:59.698 00:06:59.698 ' 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.698 --rc genhtml_branch_coverage=1 00:06:59.698 --rc genhtml_function_coverage=1 00:06:59.698 --rc genhtml_legend=1 00:06:59.698 --rc geninfo_all_blocks=1 00:06:59.698 --rc geninfo_unexecuted_blocks=1 00:06:59.698 00:06:59.698 ' 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:59.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.698 --rc genhtml_branch_coverage=1 00:06:59.698 --rc genhtml_function_coverage=1 00:06:59.698 --rc genhtml_legend=1 00:06:59.698 --rc geninfo_all_blocks=1 00:06:59.698 --rc geninfo_unexecuted_blocks=1 00:06:59.698 00:06:59.698 ' 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.698 --rc genhtml_branch_coverage=1 00:06:59.698 --rc genhtml_function_coverage=1 00:06:59.698 --rc genhtml_legend=1 00:06:59.698 --rc geninfo_all_blocks=1 00:06:59.698 --rc geninfo_unexecuted_blocks=1 00:06:59.698 00:06:59.698 ' 00:06:59.698 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:59.698 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=995865 00:06:59.698 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.698 19:05:10 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 995865 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 995865 ']' 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.698 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:59.698 [2024-12-06 19:05:10.216857] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:06:59.698 [2024-12-06 19:05:10.216970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995865 ] 00:06:59.958 [2024-12-06 19:05:10.289277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.958 [2024-12-06 19:05:10.351835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.219 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.219 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:00.219 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:00.219 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:00.219 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.219 
19:05:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.219 { 00:07:00.219 "filename": "/tmp/spdk_mem_dump.txt" 00:07:00.219 } 00:07:00.219 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.219 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:00.219 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:00.219 1 heaps totaling size 818.000000 MiB 00:07:00.219 size: 818.000000 MiB heap id: 0 00:07:00.219 end heaps---------- 00:07:00.219 9 mempools totaling size 603.782043 MiB 00:07:00.219 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:00.219 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:00.219 size: 100.555481 MiB name: bdev_io_995865 00:07:00.219 size: 50.003479 MiB name: msgpool_995865 00:07:00.219 size: 36.509338 MiB name: fsdev_io_995865 00:07:00.219 size: 21.763794 MiB name: PDU_Pool 00:07:00.219 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:00.219 size: 4.133484 MiB name: evtpool_995865 00:07:00.219 size: 0.026123 MiB name: Session_Pool 00:07:00.219 end mempools------- 00:07:00.219 6 memzones totaling size 4.142822 MiB 00:07:00.219 size: 1.000366 MiB name: RG_ring_0_995865 00:07:00.219 size: 1.000366 MiB name: RG_ring_1_995865 00:07:00.219 size: 1.000366 MiB name: RG_ring_4_995865 00:07:00.219 size: 1.000366 MiB name: RG_ring_5_995865 00:07:00.219 size: 0.125366 MiB name: RG_ring_2_995865 00:07:00.219 size: 0.015991 MiB name: RG_ring_3_995865 00:07:00.219 end memzones------- 00:07:00.219 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:00.219 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:00.219 list of free elements. 
size: 10.852478 MiB 00:07:00.219 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:00.219 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:00.219 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:00.219 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:00.219 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:00.219 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:00.219 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:00.219 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:00.219 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:07:00.219 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:00.219 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:00.219 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:00.219 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:00.219 element at address: 0x200028200000 with size: 0.410034 MiB 00:07:00.219 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:00.219 list of standard malloc elements. 
size: 199.218628 MiB 00:07:00.219 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:00.219 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:00.219 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:00.219 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:00.219 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:00.219 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:00.219 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:00.219 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:00.219 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:00.219 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:00.219 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:00.219 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:00.219 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:00.219 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:00.219 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:00.219 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:00.219 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:00.219 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:00.219 element at address: 0x200028268f80 with size: 0.000183 MiB 00:07:00.219 element at address: 0x200028269040 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:00.219 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:00.219 list of memzone associated elements. 
size: 607.928894 MiB 00:07:00.219 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:00.219 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:00.219 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:00.219 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:00.219 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:00.219 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_995865_0 00:07:00.219 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:00.219 associated memzone info: size: 48.002930 MiB name: MP_msgpool_995865_0 00:07:00.219 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:00.219 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_995865_0 00:07:00.219 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:00.219 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:00.219 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:00.219 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:00.219 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:00.219 associated memzone info: size: 3.000122 MiB name: MP_evtpool_995865_0 00:07:00.219 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:00.219 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_995865 00:07:00.219 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:00.219 associated memzone info: size: 1.007996 MiB name: MP_evtpool_995865 00:07:00.219 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:00.219 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:00.219 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:00.219 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:00.219 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:00.219 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:00.219 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:00.219 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:00.219 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:00.219 associated memzone info: size: 1.000366 MiB name: RG_ring_0_995865 00:07:00.219 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:00.219 associated memzone info: size: 1.000366 MiB name: RG_ring_1_995865 00:07:00.219 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:00.219 associated memzone info: size: 1.000366 MiB name: RG_ring_4_995865 00:07:00.219 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:00.219 associated memzone info: size: 1.000366 MiB name: RG_ring_5_995865 00:07:00.220 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:00.220 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_995865 00:07:00.220 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:00.220 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_995865 00:07:00.220 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:00.220 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:00.220 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:00.220 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:00.220 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:00.220 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:00.220 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:00.220 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_995865 00:07:00.220 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:00.220 associated memzone info: size: 0.125366 MiB name: RG_ring_2_995865 00:07:00.220 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:07:00.220 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:00.220 element at address: 0x200028269100 with size: 0.023743 MiB 00:07:00.220 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:00.220 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:00.220 associated memzone info: size: 0.015991 MiB name: RG_ring_3_995865 00:07:00.220 element at address: 0x20002826f240 with size: 0.002441 MiB 00:07:00.220 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:00.220 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:00.220 associated memzone info: size: 0.000183 MiB name: MP_msgpool_995865 00:07:00.220 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:00.220 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_995865 00:07:00.220 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:00.220 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_995865 00:07:00.220 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:07:00.220 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:00.220 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:00.220 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 995865 00:07:00.220 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 995865 ']' 00:07:00.220 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 995865 00:07:00.220 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:00.220 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.220 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 995865 00:07:00.220 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.220 19:05:10 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.220 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 995865' 00:07:00.220 killing process with pid 995865 00:07:00.220 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 995865 00:07:00.220 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 995865 00:07:00.789 00:07:00.789 real 0m1.172s 00:07:00.789 user 0m1.165s 00:07:00.789 sys 0m0.417s 00:07:00.789 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.789 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.789 ************************************ 00:07:00.789 END TEST dpdk_mem_utility 00:07:00.789 ************************************ 00:07:00.789 19:05:11 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:00.789 19:05:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.789 19:05:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.789 19:05:11 -- common/autotest_common.sh@10 -- # set +x 00:07:00.789 ************************************ 00:07:00.789 START TEST event 00:07:00.789 ************************************ 00:07:00.789 19:05:11 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:00.789 * Looking for test storage... 
00:07:00.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:00.789 19:05:11 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.789 19:05:11 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.789 19:05:11 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.789 19:05:11 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.789 19:05:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.789 19:05:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.789 19:05:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.789 19:05:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.789 19:05:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.789 19:05:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.789 19:05:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.789 19:05:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.789 19:05:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.789 19:05:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.789 19:05:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.789 19:05:11 event -- scripts/common.sh@344 -- # case "$op" in 00:07:00.789 19:05:11 event -- scripts/common.sh@345 -- # : 1 00:07:00.789 19:05:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.789 19:05:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.789 19:05:11 event -- scripts/common.sh@365 -- # decimal 1 00:07:00.789 19:05:11 event -- scripts/common.sh@353 -- # local d=1 00:07:00.789 19:05:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.789 19:05:11 event -- scripts/common.sh@355 -- # echo 1 00:07:00.789 19:05:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.789 19:05:11 event -- scripts/common.sh@366 -- # decimal 2 00:07:00.789 19:05:11 event -- scripts/common.sh@353 -- # local d=2 00:07:00.789 19:05:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.789 19:05:11 event -- scripts/common.sh@355 -- # echo 2 00:07:01.050 19:05:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.050 19:05:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.050 19:05:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.050 19:05:11 event -- scripts/common.sh@368 -- # return 0 00:07:01.050 19:05:11 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.050 19:05:11 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.050 --rc genhtml_branch_coverage=1 00:07:01.050 --rc genhtml_function_coverage=1 00:07:01.050 --rc genhtml_legend=1 00:07:01.050 --rc geninfo_all_blocks=1 00:07:01.050 --rc geninfo_unexecuted_blocks=1 00:07:01.050 00:07:01.050 ' 00:07:01.050 19:05:11 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.050 --rc genhtml_branch_coverage=1 00:07:01.050 --rc genhtml_function_coverage=1 00:07:01.050 --rc genhtml_legend=1 00:07:01.050 --rc geninfo_all_blocks=1 00:07:01.050 --rc geninfo_unexecuted_blocks=1 00:07:01.050 00:07:01.050 ' 00:07:01.050 19:05:11 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.050 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:01.050 --rc genhtml_branch_coverage=1 00:07:01.050 --rc genhtml_function_coverage=1 00:07:01.050 --rc genhtml_legend=1 00:07:01.050 --rc geninfo_all_blocks=1 00:07:01.050 --rc geninfo_unexecuted_blocks=1 00:07:01.050 00:07:01.050 ' 00:07:01.050 19:05:11 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.050 --rc genhtml_branch_coverage=1 00:07:01.050 --rc genhtml_function_coverage=1 00:07:01.050 --rc genhtml_legend=1 00:07:01.050 --rc geninfo_all_blocks=1 00:07:01.050 --rc geninfo_unexecuted_blocks=1 00:07:01.050 00:07:01.050 ' 00:07:01.050 19:05:11 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:01.050 19:05:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:01.050 19:05:11 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:01.050 19:05:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:01.050 19:05:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.050 19:05:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.050 ************************************ 00:07:01.050 START TEST event_perf 00:07:01.050 ************************************ 00:07:01.050 19:05:11 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:01.050 Running I/O for 1 seconds...[2024-12-06 19:05:11.407062] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:01.050 [2024-12-06 19:05:11.407131] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996100 ] 00:07:01.050 [2024-12-06 19:05:11.472183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.050 [2024-12-06 19:05:11.535159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.050 [2024-12-06 19:05:11.535223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.050 [2024-12-06 19:05:11.535289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.050 [2024-12-06 19:05:11.535292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.432 Running I/O for 1 seconds... 00:07:02.432 lcore 0: 226421 00:07:02.432 lcore 1: 226420 00:07:02.432 lcore 2: 226420 00:07:02.432 lcore 3: 226420 00:07:02.432 done. 
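The four "Reactor started on core N" notices and the four per-lcore counters above follow from the `-m 0xF` core mask passed to event_perf. As a minimal sketch (a hypothetical helper, not part of SPDK or these test scripts), a hex core mask can be expanded to its core list like this:

```shell
# coremask_to_cores: hypothetical helper that expands a DPDK/SPDK-style
# hex core mask (e.g. the "-m 0xF" above) into the individual core numbers.
coremask_to_cores() {
  local mask=$(( $1 )) core=0 cores=()
  while (( mask )); do
    # keep this core if its bit is set in the mask
    if (( mask & 1 )); then cores+=("$core"); fi
    (( mask >>= 1 )); (( core++ )) || true
  done
  echo "${cores[@]}"
}

coremask_to_cores 0xF   # -> "0 1 2 3", matching the four reactors above
```

0xF is binary 1111, so cores 0 through 3 each get a reactor, which is why four lcore counters are reported.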
00:07:02.432 00:07:02.432 real 0m1.204s 00:07:02.432 user 0m4.137s 00:07:02.432 sys 0m0.063s 00:07:02.432 19:05:12 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.432 19:05:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.432 ************************************ 00:07:02.432 END TEST event_perf 00:07:02.432 ************************************ 00:07:02.432 19:05:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:02.432 19:05:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:02.432 19:05:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.432 19:05:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.432 ************************************ 00:07:02.432 START TEST event_reactor 00:07:02.432 ************************************ 00:07:02.432 19:05:12 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:02.432 [2024-12-06 19:05:12.663564] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:02.432 [2024-12-06 19:05:12.663640] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996338 ] 00:07:02.432 [2024-12-06 19:05:12.729554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.432 [2024-12-06 19:05:12.784521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.371 test_start 00:07:03.371 oneshot 00:07:03.371 tick 100 00:07:03.371 tick 100 00:07:03.371 tick 250 00:07:03.371 tick 100 00:07:03.371 tick 100 00:07:03.371 tick 100 00:07:03.371 tick 250 00:07:03.371 tick 500 00:07:03.371 tick 100 00:07:03.371 tick 100 00:07:03.371 tick 250 00:07:03.371 tick 100 00:07:03.371 tick 100 00:07:03.371 test_end 00:07:03.371 00:07:03.371 real 0m1.198s 00:07:03.371 user 0m1.130s 00:07:03.371 sys 0m0.064s 00:07:03.371 19:05:13 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.371 19:05:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:03.371 ************************************ 00:07:03.371 END TEST event_reactor 00:07:03.371 ************************************ 00:07:03.371 19:05:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:03.371 19:05:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:03.371 19:05:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.371 19:05:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.371 ************************************ 00:07:03.371 START TEST event_reactor_perf 00:07:03.371 ************************************ 00:07:03.371 19:05:13 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:03.371 [2024-12-06 19:05:13.911651] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:03.371 [2024-12-06 19:05:13.911747] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996492 ] 00:07:03.630 [2024-12-06 19:05:13.978123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.630 [2024-12-06 19:05:14.031864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.568 test_start 00:07:04.568 test_end 00:07:04.568 Performance: 446582 events per second 00:07:04.568 00:07:04.568 real 0m1.197s 00:07:04.568 user 0m1.126s 00:07:04.568 sys 0m0.067s 00:07:04.568 19:05:15 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.568 19:05:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.568 ************************************ 00:07:04.568 END TEST event_reactor_perf 00:07:04.568 ************************************ 00:07:04.568 19:05:15 event -- event/event.sh@49 -- # uname -s 00:07:04.568 19:05:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:04.568 19:05:15 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:04.568 19:05:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.568 19:05:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.568 19:05:15 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.828 ************************************ 00:07:04.828 START TEST event_scheduler 00:07:04.828 ************************************ 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:04.828 * Looking for test storage... 00:07:04.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.828 19:05:15 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.828 --rc genhtml_branch_coverage=1 00:07:04.828 --rc genhtml_function_coverage=1 00:07:04.828 --rc genhtml_legend=1 00:07:04.828 --rc geninfo_all_blocks=1 00:07:04.828 --rc geninfo_unexecuted_blocks=1 00:07:04.828 00:07:04.828 ' 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.828 --rc genhtml_branch_coverage=1 00:07:04.828 --rc genhtml_function_coverage=1 00:07:04.828 --rc 
genhtml_legend=1 00:07:04.828 --rc geninfo_all_blocks=1 00:07:04.828 --rc geninfo_unexecuted_blocks=1 00:07:04.828 00:07:04.828 ' 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.828 --rc genhtml_branch_coverage=1 00:07:04.828 --rc genhtml_function_coverage=1 00:07:04.828 --rc genhtml_legend=1 00:07:04.828 --rc geninfo_all_blocks=1 00:07:04.828 --rc geninfo_unexecuted_blocks=1 00:07:04.828 00:07:04.828 ' 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.828 --rc genhtml_branch_coverage=1 00:07:04.828 --rc genhtml_function_coverage=1 00:07:04.828 --rc genhtml_legend=1 00:07:04.828 --rc geninfo_all_blocks=1 00:07:04.828 --rc geninfo_unexecuted_blocks=1 00:07:04.828 00:07:04.828 ' 00:07:04.828 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:04.828 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=996678 00:07:04.828 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:04.828 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.828 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 996678 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 996678 ']' 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.828 19:05:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.828 [2024-12-06 19:05:15.338752] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:04.828 [2024-12-06 19:05:15.338834] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996678 ] 00:07:05.085 [2024-12-06 19:05:15.405331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.085 [2024-12-06 19:05:15.465506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.085 [2024-12-06 19:05:15.465568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.085 [2024-12-06 19:05:15.465636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.085 [2024-12-06 19:05:15.465639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.085 19:05:15 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.085 19:05:15 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:05.085 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:05.085 19:05:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.085 19:05:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.085 [2024-12-06 19:05:15.570513] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:05.085 [2024-12-06 19:05:15.570538] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:05.085 [2024-12-06 19:05:15.570571] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:05.085 [2024-12-06 19:05:15.570583] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:05.085 [2024-12-06 19:05:15.570593] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:05.085 19:05:15 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.085 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:05.085 19:05:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.085 19:05:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.343 [2024-12-06 19:05:15.664144] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:05.343 19:05:15 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.343 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:05.343 19:05:15 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.343 19:05:15 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.343 19:05:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.343 ************************************ 00:07:05.343 START TEST scheduler_create_thread 00:07:05.343 ************************************ 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.343 2 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.343 3 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.343 4 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.343 5 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.343 19:05:15 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.343 6 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.343 7 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.343 8 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.343 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.344 19:05:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.344 9 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.344 10 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.344 19:05:15 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.344 19:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.912 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.912 00:07:05.912 real 0m0.591s 00:07:05.912 user 0m0.009s 00:07:05.912 sys 0m0.004s 00:07:05.912 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.912 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.912 ************************************ 00:07:05.912 END TEST scheduler_create_thread 00:07:05.912 ************************************ 00:07:05.912 19:05:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:05.912 19:05:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 996678 00:07:05.912 19:05:16 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 996678 ']' 00:07:05.912 19:05:16 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 996678 00:07:05.913 19:05:16 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:05.913 19:05:16 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.913 19:05:16 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 996678 00:07:05.913 19:05:16 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:05.913 19:05:16 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:05.913 19:05:16 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 996678' 00:07:05.913 killing process with pid 996678 00:07:05.913 19:05:16 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 996678 00:07:05.913 19:05:16 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 996678 00:07:06.476 [2024-12-06 19:05:16.764356] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:06.476 00:07:06.476 real 0m1.827s 00:07:06.476 user 0m2.490s 00:07:06.476 sys 0m0.349s 00:07:06.476 19:05:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.476 19:05:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:06.476 ************************************ 00:07:06.476 END TEST event_scheduler 00:07:06.476 ************************************ 00:07:06.476 19:05:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:06.476 19:05:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:06.476 19:05:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.476 19:05:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.476 19:05:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.476 ************************************ 00:07:06.476 START TEST app_repeat 00:07:06.476 ************************************ 00:07:06.476 19:05:17 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=996882 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 996882' 00:07:06.476 Process app_repeat pid: 996882 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:06.476 spdk_app_start Round 0 00:07:06.476 19:05:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 996882 /var/tmp/spdk-nbd.sock 00:07:06.476 19:05:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 996882 ']' 00:07:06.476 19:05:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.476 19:05:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.476 19:05:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:06.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:06.477 19:05:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.477 19:05:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.735 [2024-12-06 19:05:17.060869] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:06.735 [2024-12-06 19:05:17.060933] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996882 ] 00:07:06.735 [2024-12-06 19:05:17.128195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.735 [2024-12-06 19:05:17.185959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.735 [2024-12-06 19:05:17.185962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.994 19:05:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.994 19:05:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:06.994 19:05:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.272 Malloc0 00:07:07.272 19:05:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.529 Malloc1 00:07:07.529 19:05:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.529 
19:05:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.529 19:05:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.786 /dev/nbd0 00:07:07.786 19:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.786 19:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:07.786 1+0 records in 00:07:07.786 1+0 records out 00:07:07.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179177 s, 22.9 MB/s 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.786 19:05:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:07.786 19:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.786 19:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.786 19:05:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:08.044 /dev/nbd1 00:07:08.044 19:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.044 19:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.044 19:05:18 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.044 1+0 records in 00:07:08.044 1+0 records out 00:07:08.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238019 s, 17.2 MB/s 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.044 19:05:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:08.044 19:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.044 19:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.044 19:05:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.044 19:05:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.044 19:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.302 { 00:07:08.302 "nbd_device": "/dev/nbd0", 00:07:08.302 "bdev_name": "Malloc0" 00:07:08.302 }, 00:07:08.302 { 00:07:08.302 "nbd_device": "/dev/nbd1", 00:07:08.302 "bdev_name": "Malloc1" 00:07:08.302 } 00:07:08.302 ]' 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.302 { 00:07:08.302 "nbd_device": "/dev/nbd0", 00:07:08.302 "bdev_name": "Malloc0" 00:07:08.302 
}, 00:07:08.302 { 00:07:08.302 "nbd_device": "/dev/nbd1", 00:07:08.302 "bdev_name": "Malloc1" 00:07:08.302 } 00:07:08.302 ]' 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.302 /dev/nbd1' 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.302 /dev/nbd1' 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:08.302 256+0 records in 00:07:08.302 256+0 records out 00:07:08.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531595 s, 197 MB/s 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.302 19:05:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.560 256+0 records in 00:07:08.560 256+0 records out 00:07:08.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206085 s, 50.9 MB/s 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.560 256+0 records in 00:07:08.560 256+0 records out 00:07:08.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220996 s, 47.4 MB/s 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.560 19:05:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:08.560 19:05:18 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.561 19:05:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:08.561 19:05:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.561 19:05:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.561 19:05:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.561 19:05:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.561 19:05:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.561 19:05:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.818 19:05:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:09.076 19:05:19 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.076 19:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.334 19:05:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.334 19:05:19 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:09.594 19:05:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:09.853 [2024-12-06 19:05:20.341736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.853 [2024-12-06 19:05:20.395471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.853 [2024-12-06 19:05:20.395471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.112 [2024-12-06 19:05:20.451863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:10.112 [2024-12-06 19:05:20.451927] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.651 19:05:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:12.651 19:05:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:12.651 spdk_app_start Round 1 00:07:12.651 19:05:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 996882 /var/tmp/spdk-nbd.sock 00:07:12.651 19:05:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 996882 ']' 00:07:12.651 19:05:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.651 19:05:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.651 19:05:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:12.651 19:05:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.651 19:05:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.909 19:05:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.909 19:05:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:12.909 19:05:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.167 Malloc0 00:07:13.167 19:05:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.426 Malloc1 00:07:13.426 19:05:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.426 19:05:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:13.685 /dev/nbd0 00:07:13.944 19:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.944 19:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.944 1+0 records in 00:07:13.944 1+0 records out 00:07:13.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265934 s, 15.4 MB/s 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.944 19:05:24 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.944 19:05:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.944 19:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.944 19:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.944 19:05:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:14.202 /dev/nbd1 00:07:14.202 19:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:14.202 19:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:14.202 1+0 records in 00:07:14.202 1+0 records out 00:07:14.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162441 s, 25.2 MB/s 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.202 19:05:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:14.202 19:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.202 19:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.202 19:05:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.202 19:05:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.202 19:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:14.460 { 00:07:14.460 "nbd_device": "/dev/nbd0", 00:07:14.460 "bdev_name": "Malloc0" 00:07:14.460 }, 00:07:14.460 { 00:07:14.460 "nbd_device": "/dev/nbd1", 00:07:14.460 "bdev_name": "Malloc1" 00:07:14.460 } 00:07:14.460 ]' 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:14.460 { 00:07:14.460 "nbd_device": "/dev/nbd0", 00:07:14.460 "bdev_name": "Malloc0" 00:07:14.460 }, 00:07:14.460 { 00:07:14.460 "nbd_device": "/dev/nbd1", 00:07:14.460 "bdev_name": "Malloc1" 00:07:14.460 } 00:07:14.460 ]' 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:14.460 /dev/nbd1' 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:14.460 /dev/nbd1' 00:07:14.460 
19:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:14.460 256+0 records in 00:07:14.460 256+0 records out 00:07:14.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410061 s, 256 MB/s 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:14.460 256+0 records in 00:07:14.460 256+0 records out 00:07:14.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204321 s, 51.3 MB/s 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:14.460 256+0 records in 00:07:14.460 256+0 records out 00:07:14.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02199 s, 47.7 MB/s 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:14.460 19:05:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.461 19:05:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.718 19:05:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:15.285 19:05:25 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.285 19:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.543 19:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.543 19:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.543 19:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.543 19:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:15.543 19:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.543 19:05:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.543 19:05:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.543 19:05:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.543 19:05:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.543 19:05:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:15.803 19:05:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:16.062 [2024-12-06 19:05:26.403403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.062 [2024-12-06 19:05:26.457101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.062 [2024-12-06 19:05:26.457102] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.062 [2024-12-06 19:05:26.515713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:16.062 [2024-12-06 19:05:26.515792] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:19.351 19:05:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:19.351 19:05:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:19.351 spdk_app_start Round 2 00:07:19.351 19:05:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 996882 /var/tmp/spdk-nbd.sock 00:07:19.351 19:05:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 996882 ']' 00:07:19.351 19:05:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:19.351 19:05:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.351 19:05:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:19.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:19.351 19:05:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.351 19:05:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.351 19:05:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.351 19:05:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:19.351 19:05:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.351 Malloc0 00:07:19.351 19:05:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.609 Malloc1 00:07:19.609 19:05:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.609 19:05:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:19.866 /dev/nbd0 00:07:19.866 19:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.866 19:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.866 1+0 records in 00:07:19.866 1+0 records out 00:07:19.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247516 s, 16.5 MB/s 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:19.866 19:05:30 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.866 19:05:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:19.866 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.866 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.866 19:05:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:20.124 /dev/nbd1 00:07:20.124 19:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:20.124 19:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:20.124 1+0 records in 00:07:20.124 1+0 records out 00:07:20.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173179 s, 23.7 MB/s 00:07:20.124 19:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:20.383 19:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:20.383 19:05:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:20.383 19:05:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:20.383 19:05:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:20.383 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.383 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.383 19:05:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.383 19:05:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.383 19:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.642 19:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.642 { 00:07:20.642 "nbd_device": "/dev/nbd0", 00:07:20.642 "bdev_name": "Malloc0" 00:07:20.642 }, 00:07:20.642 { 00:07:20.642 "nbd_device": "/dev/nbd1", 00:07:20.642 "bdev_name": "Malloc1" 00:07:20.642 } 00:07:20.642 ]' 00:07:20.642 19:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.642 { 00:07:20.642 "nbd_device": "/dev/nbd0", 00:07:20.642 "bdev_name": "Malloc0" 00:07:20.642 }, 00:07:20.642 { 00:07:20.642 "nbd_device": "/dev/nbd1", 00:07:20.642 "bdev_name": "Malloc1" 00:07:20.642 } 00:07:20.642 ]' 00:07:20.642 19:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.642 /dev/nbd1' 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.642 /dev/nbd1' 00:07:20.642 
19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:20.642 256+0 records in 00:07:20.642 256+0 records out 00:07:20.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433985 s, 242 MB/s 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.642 256+0 records in 00:07:20.642 256+0 records out 00:07:20.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199826 s, 52.5 MB/s 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.642 256+0 records in 00:07:20.642 256+0 records out 00:07:20.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221074 s, 47.4 MB/s 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.642 19:05:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.643 19:05:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.900 19:05:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:21.158 19:05:31 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.158 19:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.415 19:05:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.415 19:05:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:21.985 19:05:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.985 [2024-12-06 19:05:32.487137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.985 [2024-12-06 19:05:32.541786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.985 [2024-12-06 19:05:32.541789] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.243 [2024-12-06 19:05:32.600752] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:22.243 [2024-12-06 19:05:32.600820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:24.783 19:05:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 996882 /var/tmp/spdk-nbd.sock 00:07:24.783 19:05:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 996882 ']' 00:07:24.783 19:05:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.783 19:05:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.783 19:05:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:24.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:24.783 19:05:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.783 19:05:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:25.042 19:05:35 event.app_repeat -- event/event.sh@39 -- # killprocess 996882 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 996882 ']' 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 996882 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 996882 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 996882' 00:07:25.042 killing process with pid 996882 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@973 -- # kill 996882 00:07:25.042 19:05:35 event.app_repeat -- common/autotest_common.sh@978 -- # wait 996882 00:07:25.300 spdk_app_start is called in Round 0. 00:07:25.300 Shutdown signal received, stop current app iteration 00:07:25.300 Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 reinitialization... 00:07:25.300 spdk_app_start is called in Round 1. 00:07:25.300 Shutdown signal received, stop current app iteration 00:07:25.300 Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 reinitialization... 00:07:25.300 spdk_app_start is called in Round 2. 
00:07:25.300 Shutdown signal received, stop current app iteration 00:07:25.300 Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 reinitialization... 00:07:25.300 spdk_app_start is called in Round 3. 00:07:25.300 Shutdown signal received, stop current app iteration 00:07:25.300 19:05:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:25.300 19:05:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:25.300 00:07:25.300 real 0m18.741s 00:07:25.300 user 0m41.625s 00:07:25.300 sys 0m3.218s 00:07:25.300 19:05:35 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.300 19:05:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.300 ************************************ 00:07:25.300 END TEST app_repeat 00:07:25.300 ************************************ 00:07:25.300 19:05:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:25.300 19:05:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:25.300 19:05:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.300 19:05:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.300 19:05:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.300 ************************************ 00:07:25.300 START TEST cpu_locks 00:07:25.300 ************************************ 00:07:25.300 19:05:35 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:25.559 * Looking for test storage... 
00:07:25.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.559 19:05:35 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:25.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.559 --rc genhtml_branch_coverage=1 00:07:25.559 --rc genhtml_function_coverage=1 00:07:25.559 --rc genhtml_legend=1 00:07:25.559 --rc geninfo_all_blocks=1 00:07:25.559 --rc geninfo_unexecuted_blocks=1 00:07:25.559 00:07:25.559 ' 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:25.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.559 --rc genhtml_branch_coverage=1 00:07:25.559 --rc genhtml_function_coverage=1 00:07:25.559 --rc genhtml_legend=1 00:07:25.559 --rc geninfo_all_blocks=1 00:07:25.559 --rc geninfo_unexecuted_blocks=1 
00:07:25.559 00:07:25.559 ' 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:25.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.559 --rc genhtml_branch_coverage=1 00:07:25.559 --rc genhtml_function_coverage=1 00:07:25.559 --rc genhtml_legend=1 00:07:25.559 --rc geninfo_all_blocks=1 00:07:25.559 --rc geninfo_unexecuted_blocks=1 00:07:25.559 00:07:25.559 ' 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:25.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.559 --rc genhtml_branch_coverage=1 00:07:25.559 --rc genhtml_function_coverage=1 00:07:25.559 --rc genhtml_legend=1 00:07:25.559 --rc geninfo_all_blocks=1 00:07:25.559 --rc geninfo_unexecuted_blocks=1 00:07:25.559 00:07:25.559 ' 00:07:25.559 19:05:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:25.559 19:05:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:25.559 19:05:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:25.559 19:05:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.559 19:05:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.559 ************************************ 00:07:25.559 START TEST default_locks 00:07:25.559 ************************************ 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=999364 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 999364 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 999364 ']' 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.559 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.559 [2024-12-06 19:05:36.062124] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
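The `lt 1.15 2` / `cmp_versions` trace above (from `scripts/common.sh`) splits each version string on `.`, `-` and `:` with `IFS=.-: read -ra`, then compares field by field, padding the shorter version with zeros. A minimal bash sketch of that idiom; the function name `ver_lt` is a hypothetical stand-in, not the SPDK helper itself:

```shell
# Sketch of the field-by-field version comparison idiom traced above.
# ver_lt A B returns 0 (true) when version A sorts strictly before B.
ver_lt() {
    local -a v1 v2
    local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        # Missing fields compare as 0, so "1.15" vs "2" still works.
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 sorts before 2"
```

Note the numeric comparison: `1.2` sorts before `1.10`, which a plain string comparison would get wrong. That is why the trace checks `lcov --version` this way before building the coverage options.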
00:07:25.559 [2024-12-06 19:05:36.062221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999364 ] 00:07:25.559 [2024-12-06 19:05:36.126348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.818 [2024-12-06 19:05:36.186548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.102 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.102 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:26.102 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 999364 00:07:26.102 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 999364 00:07:26.102 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.364 lslocks: write error 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 999364 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 999364 ']' 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 999364 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 999364 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 999364' 00:07:26.364 killing process with pid 999364 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 999364 00:07:26.364 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 999364 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 999364 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 999364 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 999364 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 999364 ']' 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
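The `killprocess` sequence traced above follows a fixed shape: probe the pid with `kill -0`, look up its command name with `ps --no-headers -o comm=` (the trace shows `reactor_0` for `spdk_tgt`), refuse to signal a `sudo` wrapper, then `kill` and `wait`. A rough reconstruction of that flow, not the verbatim `autotest_common.sh` helper:

```shell
# Reconstruction of the killprocess flow in the trace above: probe the pid,
# look up its command name, refuse to kill "sudo", then kill and reap.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1       # process already gone?
    name=$(ps --no-headers -o comm= "$pid")      # e.g. "reactor_0" for spdk_tgt
    if [ "$name" = sudo ]; then
        return 1                                 # never SIGTERM the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reap if it was our child
}

sleep 5 &
killprocess "$!"
```

The `wait` at the end is what produces the clean teardown between test cases: the next test's `spdk_tgt` will not race the previous instance for the same lock files.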
00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (999364) - No such process 00:07:26.624 ERROR: process (pid: 999364) is no longer running 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:26.624 00:07:26.624 real 0m1.153s 00:07:26.624 user 0m1.131s 00:07:26.624 sys 0m0.501s 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.624 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.624 ************************************ 00:07:26.624 END TEST default_locks 00:07:26.624 ************************************ 00:07:26.624 19:05:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:26.624 19:05:37 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.624 19:05:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.624 19:05:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.883 ************************************ 00:07:26.883 START TEST default_locks_via_rpc 00:07:26.883 ************************************ 00:07:26.883 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:26.883 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=999528 00:07:26.883 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.883 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 999528 00:07:26.884 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 999528 ']' 00:07:26.884 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.884 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.884 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.884 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.884 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.884 [2024-12-06 19:05:37.268018] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
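The `NOT waitforlisten 999364` block earlier (ending in `es=1`, `(( !es == 0 ))` and the "No such process" error) is the expected-failure idiom: `default_locks` passes precisely because waiting on the killed pid fails. A stripped-down sketch of that wrapper, reconstructed from the trace rather than copied from it:

```shell
# Sketch of the NOT expected-failure wrapper traced earlier: run the command,
# capture its exit status, and invert it so the caller succeeds exactly when
# the wrapped command failed.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # status 0 only if the wrapped command failed
}

NOT false && echo "false failed, as expected"
```

The trace's `(( !es == 0 ))` is the same test written with arithmetic negation; `(( es > 128 ))` additionally distinguishes death-by-signal from an ordinary nonzero exit.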
00:07:26.884 [2024-12-06 19:05:37.268096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999528 ] 00:07:26.884 [2024-12-06 19:05:37.333243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.884 [2024-12-06 19:05:37.389403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.143 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.143 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:27.143 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:27.143 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.144 19:05:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 999528 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 999528 00:07:27.144 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 999528 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 999528 ']' 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 999528 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 999528 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 999528' 00:07:27.404 killing process with pid 999528 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 999528 00:07:27.404 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 999528 00:07:27.973 00:07:27.973 real 0m1.134s 00:07:27.973 user 0m1.117s 00:07:27.973 sys 0m0.487s 00:07:27.973 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.973 19:05:38 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.973 ************************************ 00:07:27.973 END TEST default_locks_via_rpc 00:07:27.973 ************************************ 00:07:27.973 19:05:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:27.973 19:05:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.973 19:05:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.973 19:05:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.973 ************************************ 00:07:27.973 START TEST non_locking_app_on_locked_coremask 00:07:27.973 ************************************ 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=999698 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 999698 /var/tmp/spdk.sock 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 999698 ']' 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:27.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.973 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.973 [2024-12-06 19:05:38.455662] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:27.973 [2024-12-06 19:05:38.455774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999698 ] 00:07:27.973 [2024-12-06 19:05:38.520143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.231 [2024-12-06 19:05:38.579586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=999822 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 999822 /var/tmp/spdk2.sock 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 999822 ']' 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.490 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.490 [2024-12-06 19:05:38.889234] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:28.490 [2024-12-06 19:05:38.889324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999822 ] 00:07:28.491 [2024-12-06 19:05:38.983743] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
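The `locks_exist` check traced above boils down to one pipeline: `lslocks -p <pid> | grep -q spdk_cpu_lock`, asserting that the target pid holds a file lock whose entry mentions the per-core `spdk_cpu_lock` file SPDK takes for each claimed CPU core. A sketch of that check, not the verbatim helper:

```shell
# Reconstruction of the locks_exist helper from the trace: true iff the pid
# holds a lock on one of SPDK's spdk_cpu_lock files.
locks_exist() {
    lslocks -p "$1" 2>/dev/null | grep -q spdk_cpu_lock
}

# The "lslocks: write error" lines in the log are harmless: grep -q exits on
# the first match, closing the pipe, and lslocks reports the resulting EPIPE.
locks_exist $$ || echo "current shell holds no spdk_cpu_lock"
```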
00:07:28.491 [2024-12-06 19:05:38.983770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.750 [2024-12-06 19:05:39.096240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.317 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.317 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:29.317 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 999698 00:07:29.317 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 999698 00:07:29.317 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.886 lslocks: write error 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 999698 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 999698 ']' 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 999698 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 999698 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 999698' 00:07:29.886 killing process with pid 999698 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 999698 00:07:29.886 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 999698 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 999822 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 999822 ']' 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 999822 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 999822 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 999822' 00:07:30.824 killing process with pid 999822 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 999822 00:07:30.824 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 999822 00:07:31.394 00:07:31.394 real 0m3.313s 00:07:31.394 user 0m3.569s 00:07:31.394 sys 0m1.043s 00:07:31.394 19:05:41 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.394 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 ************************************ 00:07:31.394 END TEST non_locking_app_on_locked_coremask 00:07:31.394 ************************************ 00:07:31.394 19:05:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:31.394 19:05:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.394 19:05:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.394 19:05:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 ************************************ 00:07:31.394 START TEST locking_app_on_unlocked_coremask 00:07:31.394 ************************************ 00:07:31.394 19:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:31.394 19:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1000129 00:07:31.394 19:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:31.394 19:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1000129 /var/tmp/spdk.sock 00:07:31.394 19:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1000129 ']' 00:07:31.394 19:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.394 19:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.394 19:05:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.394 19:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.394 19:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 [2024-12-06 19:05:41.822620] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:31.394 [2024-12-06 19:05:41.822724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000129 ] 00:07:31.394 [2024-12-06 19:05:41.888140] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.394 [2024-12-06 19:05:41.888181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.394 [2024-12-06 19:05:41.945539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1000257 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1000257 /var/tmp/spdk2.sock 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1000257 ']' 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.652 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.913 [2024-12-06 19:05:42.259816] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:31.913 [2024-12-06 19:05:42.259899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000257 ]
00:07:31.913 [2024-12-06 19:05:42.357029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.913 [2024-12-06 19:05:42.468676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.874 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:32.874 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:32.874 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1000257
00:07:32.874 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1000257
00:07:32.874 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:33.131 lslocks: write error
00:07:33.131 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1000129
00:07:33.132 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1000129 ']'
00:07:33.132 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1000129
00:07:33.132 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:33.132 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:33.132 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1000129
00:07:33.392 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:33.392 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:33.392 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1000129'
00:07:33.392 killing process with pid 1000129
00:07:33.392 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1000129
00:07:33.392 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1000129
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1000257
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1000257 ']'
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1000257
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1000257
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1000257'
00:07:34.330 killing process with pid 1000257
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1000257
00:07:34.330 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1000257
00:07:34.588
00:07:34.588 real 0m3.243s
00:07:34.588 user 0m3.480s
00:07:34.588 sys 0m1.049s
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:34.588 ************************************
00:07:34.588 END TEST locking_app_on_unlocked_coremask
00:07:34.588 ************************************
00:07:34.588 19:05:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:34.588 19:05:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:34.588 19:05:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:34.588 19:05:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:34.588 ************************************
00:07:34.588 START TEST locking_app_on_locked_coremask
00:07:34.588 ************************************
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1000563
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1000563 /var/tmp/spdk.sock
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1000563 ']'
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:34.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:34.588 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:34.911 [2024-12-06 19:05:45.117350] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
00:07:34.911 [2024-12-06 19:05:45.117413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000563 ]
00:07:34.911 [2024-12-06 19:05:45.180552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:34.911 [2024-12-06 19:05:45.241945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1000691
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1000691 /var/tmp/spdk2.sock
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1000691 /var/tmp/spdk2.sock
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1000691 /var/tmp/spdk2.sock
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1000691 ']'
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:35.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:35.168 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:35.168 [2024-12-06 19:05:45.554730] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
00:07:35.168 [2024-12-06 19:05:45.554821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000691 ]
00:07:35.168 [2024-12-06 19:05:45.648743] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1000563 has claimed it.
00:07:35.168 [2024-12-06 19:05:45.648798] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:35.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1000691) - No such process
00:07:35.734 ERROR: process (pid: 1000691) is no longer running
00:07:35.734 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:35.734 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:35.734 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:35.734 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:35.734 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:35.734 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:35.734 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1000563
00:07:35.734 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1000563
00:07:35.734 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:35.993 lslocks: write error
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1000563
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1000563 ']'
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1000563
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1000563
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1000563'
00:07:35.993 killing process with pid 1000563
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1000563
00:07:35.993 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1000563
00:07:36.558
00:07:36.558 real 0m1.890s
00:07:36.558 user 0m2.113s
00:07:36.558 sys 0m0.588s
00:07:36.558 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.558 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:36.558 ************************************
00:07:36.558 END TEST locking_app_on_locked_coremask
00:07:36.558 ************************************
00:07:36.558 19:05:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:36.558 19:05:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.558 19:05:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.558 19:05:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:36.558 ************************************
00:07:36.558 START TEST locking_overlapped_coremask
00:07:36.558 ************************************
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1000861
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1000861 /var/tmp/spdk.sock
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1000861 ']'
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:36.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.558 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:36.558 [2024-12-06 19:05:47.057069] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
00:07:36.558 [2024-12-06 19:05:47.057148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000861 ]
00:07:36.558 [2024-12-06 19:05:47.117616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:36.817 [2024-12-06 19:05:47.175334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:36.817 [2024-12-06 19:05:47.175398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:36.817 [2024-12-06 19:05:47.175402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.075 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:37.075 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:37.075 19:05:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1000899
00:07:37.075 19:05:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1000899 /var/tmp/spdk2.sock
00:07:37.075 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:37.075 19:05:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:37.075 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1000899 /var/tmp/spdk2.sock
00:07:37.075 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1000899 /var/tmp/spdk2.sock
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1000899 ']'
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:37.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:37.076 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:37.076 [2024-12-06 19:05:47.510769] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
00:07:37.076 [2024-12-06 19:05:47.510857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000899 ]
00:07:37.076 [2024-12-06 19:05:47.615894] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1000861 has claimed it.
00:07:37.076 [2024-12-06 19:05:47.615971] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:38.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1000899) - No such process
00:07:38.013 ERROR: process (pid: 1000899) is no longer running
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1000861
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1000861 ']'
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1000861
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1000861
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1000861'
00:07:38.013 killing process with pid 1000861
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1000861
00:07:38.013 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1000861
00:07:38.274
00:07:38.274 real 0m1.688s
00:07:38.274 user 0m4.737s
00:07:38.274 sys 0m0.461s
00:07:38.274 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.274 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:38.274 ************************************
00:07:38.274 END TEST locking_overlapped_coremask
00:07:38.274 ************************************
00:07:38.274 19:05:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:38.274 19:05:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:38.274 19:05:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.274 19:05:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:38.274 ************************************
00:07:38.274 START TEST locking_overlapped_coremask_via_rpc
00:07:38.274 ************************************
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1001147
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1001147 /var/tmp/spdk.sock
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1001147 ']'
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:38.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:38.275 19:05:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:38.275 [2024-12-06 19:05:48.790701] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
00:07:38.275 [2024-12-06 19:05:48.790800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001147 ]
00:07:38.534 [2024-12-06 19:05:48.857825] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:38.534 [2024-12-06 19:05:48.857865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:38.534 [2024-12-06 19:05:48.919786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:38.534 [2024-12-06 19:05:48.919852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:38.534 [2024-12-06 19:05:48.919856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1001166
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1001166 /var/tmp/spdk2.sock
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1001166 ']'
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:38.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:38.794 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:38.794 [2024-12-06 19:05:49.261169] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
00:07:38.794 [2024-12-06 19:05:49.261251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001166 ]
00:07:38.794 [2024-12-06 19:05:49.363398] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:38.794 [2024-12-06 19:05:49.363431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:39.054 [2024-12-06 19:05:49.485181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:39.054 [2024-12-06 19:05:49.488722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:07:39.054 [2024-12-06 19:05:49.488725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:39.993 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.993 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:39.993 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:39.993 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:39.994 [2024-12-06 19:05:50.255781] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1001147 has claimed it.
00:07:39.994 request:
00:07:39.994 {
00:07:39.994 "method": "framework_enable_cpumask_locks",
00:07:39.994 "req_id": 1
00:07:39.994 }
00:07:39.994 Got JSON-RPC error response
00:07:39.994 response:
00:07:39.994 {
00:07:39.994 "code": -32603,
00:07:39.994 "message": "Failed to claim CPU core: 2"
00:07:39.994 }
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1001147 /var/tmp/spdk.sock
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1001147 ']'
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:39.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1001166 /var/tmp/spdk2.sock
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1001166 ']'
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:39.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.994 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.253 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.253 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:40.253 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:40.253 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.253 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.253 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.253 00:07:40.253 real 0m2.059s 00:07:40.253 user 0m1.112s 00:07:40.253 sys 0m0.203s 00:07:40.253 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.254 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.254 ************************************ 00:07:40.254 END TEST locking_overlapped_coremask_via_rpc 00:07:40.254 ************************************ 00:07:40.254 19:05:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:40.254 19:05:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1001147 ]] 00:07:40.254 19:05:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1001147 00:07:40.254 19:05:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1001147 ']' 00:07:40.254 19:05:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1001147 00:07:40.254 19:05:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:40.254 19:05:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.254 19:05:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1001147 00:07:40.513 19:05:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.513 19:05:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.513 19:05:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1001147' 00:07:40.513 killing process with pid 1001147 00:07:40.513 19:05:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1001147 00:07:40.513 19:05:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1001147 00:07:40.772 19:05:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1001166 ]] 00:07:40.772 19:05:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1001166 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1001166 ']' 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1001166 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1001166 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1001166' 00:07:40.772 killing process with pid 1001166 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1001166 00:07:40.772 19:05:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1001166 00:07:41.342 19:05:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.342 19:05:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:41.342 19:05:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1001147 ]] 00:07:41.342 19:05:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1001147 00:07:41.342 19:05:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1001147 ']' 00:07:41.342 19:05:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1001147 00:07:41.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1001147) - No such process 00:07:41.342 19:05:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1001147 is not found' 00:07:41.342 Process with pid 1001147 is not found 00:07:41.342 19:05:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1001166 ]] 00:07:41.342 19:05:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1001166 00:07:41.342 19:05:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1001166 ']' 00:07:41.342 19:05:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1001166 00:07:41.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1001166) - No such process 00:07:41.342 19:05:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1001166 is not found' 00:07:41.342 Process with pid 1001166 is not found 00:07:41.342 19:05:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.342 00:07:41.342 real 0m15.927s 00:07:41.342 user 0m28.932s 00:07:41.342 sys 0m5.288s 00:07:41.342 19:05:51 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.342 
19:05:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.342 ************************************ 00:07:41.342 END TEST cpu_locks 00:07:41.342 ************************************ 00:07:41.342 00:07:41.342 real 0m40.553s 00:07:41.342 user 1m19.672s 00:07:41.342 sys 0m9.300s 00:07:41.342 19:05:51 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.342 19:05:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:41.342 ************************************ 00:07:41.342 END TEST event 00:07:41.342 ************************************ 00:07:41.342 19:05:51 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:41.342 19:05:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.342 19:05:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.342 19:05:51 -- common/autotest_common.sh@10 -- # set +x 00:07:41.342 ************************************ 00:07:41.342 START TEST thread 00:07:41.342 ************************************ 00:07:41.342 19:05:51 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:41.342 * Looking for test storage... 
00:07:41.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:41.342 19:05:51 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.342 19:05:51 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.342 19:05:51 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:41.603 19:05:51 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:41.603 19:05:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.603 19:05:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.603 19:05:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.603 19:05:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.603 19:05:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.603 19:05:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.603 19:05:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.603 19:05:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.603 19:05:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.603 19:05:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.603 19:05:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.603 19:05:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:41.603 19:05:51 thread -- scripts/common.sh@345 -- # : 1 00:07:41.603 19:05:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.603 19:05:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.603 19:05:51 thread -- scripts/common.sh@365 -- # decimal 1 00:07:41.603 19:05:51 thread -- scripts/common.sh@353 -- # local d=1 00:07:41.603 19:05:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.603 19:05:51 thread -- scripts/common.sh@355 -- # echo 1 00:07:41.603 19:05:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.603 19:05:51 thread -- scripts/common.sh@366 -- # decimal 2 00:07:41.603 19:05:51 thread -- scripts/common.sh@353 -- # local d=2 00:07:41.603 19:05:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.603 19:05:51 thread -- scripts/common.sh@355 -- # echo 2 00:07:41.603 19:05:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.603 19:05:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.603 19:05:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.603 19:05:51 thread -- scripts/common.sh@368 -- # return 0 00:07:41.603 19:05:51 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.603 19:05:51 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:41.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.603 --rc genhtml_branch_coverage=1 00:07:41.603 --rc genhtml_function_coverage=1 00:07:41.603 --rc genhtml_legend=1 00:07:41.603 --rc geninfo_all_blocks=1 00:07:41.603 --rc geninfo_unexecuted_blocks=1 00:07:41.603 00:07:41.603 ' 00:07:41.603 19:05:51 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:41.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.603 --rc genhtml_branch_coverage=1 00:07:41.603 --rc genhtml_function_coverage=1 00:07:41.603 --rc genhtml_legend=1 00:07:41.603 --rc geninfo_all_blocks=1 00:07:41.603 --rc geninfo_unexecuted_blocks=1 00:07:41.603 00:07:41.603 ' 00:07:41.603 19:05:51 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:41.603 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.603 --rc genhtml_branch_coverage=1 00:07:41.603 --rc genhtml_function_coverage=1 00:07:41.603 --rc genhtml_legend=1 00:07:41.603 --rc geninfo_all_blocks=1 00:07:41.603 --rc geninfo_unexecuted_blocks=1 00:07:41.603 00:07:41.603 ' 00:07:41.603 19:05:51 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:41.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.603 --rc genhtml_branch_coverage=1 00:07:41.603 --rc genhtml_function_coverage=1 00:07:41.603 --rc genhtml_legend=1 00:07:41.603 --rc geninfo_all_blocks=1 00:07:41.603 --rc geninfo_unexecuted_blocks=1 00:07:41.603 00:07:41.603 ' 00:07:41.603 19:05:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.603 19:05:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:41.603 19:05:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.603 19:05:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.603 ************************************ 00:07:41.603 START TEST thread_poller_perf 00:07:41.603 ************************************ 00:07:41.603 19:05:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.603 [2024-12-06 19:05:52.023326] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:41.603 [2024-12-06 19:05:52.023393] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001645 ] 00:07:41.603 [2024-12-06 19:05:52.088309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.603 [2024-12-06 19:05:52.144551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.603 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:43.011 [2024-12-06T18:05:53.588Z] ====================================== 00:07:43.011 [2024-12-06T18:05:53.588Z] busy:2707965273 (cyc) 00:07:43.011 [2024-12-06T18:05:53.588Z] total_run_count: 364000 00:07:43.011 [2024-12-06T18:05:53.588Z] tsc_hz: 2700000000 (cyc) 00:07:43.011 [2024-12-06T18:05:53.588Z] ====================================== 00:07:43.011 [2024-12-06T18:05:53.588Z] poller_cost: 7439 (cyc), 2755 (nsec) 00:07:43.011 00:07:43.011 real 0m1.203s 00:07:43.011 user 0m1.139s 00:07:43.011 sys 0m0.059s 00:07:43.011 19:05:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.011 19:05:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.011 ************************************ 00:07:43.011 END TEST thread_poller_perf 00:07:43.011 ************************************ 00:07:43.011 19:05:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.011 19:05:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:43.011 19:05:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.011 19:05:53 thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.011 ************************************ 00:07:43.011 START TEST thread_poller_perf 00:07:43.011 
************************************ 00:07:43.011 19:05:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.011 [2024-12-06 19:05:53.270008] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:43.011 [2024-12-06 19:05:53.270073] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001815 ] 00:07:43.011 [2024-12-06 19:05:53.331692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.011 [2024-12-06 19:05:53.387630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.011 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:43.948 [2024-12-06T18:05:54.525Z] ====================================== 00:07:43.948 [2024-12-06T18:05:54.525Z] busy:2701933197 (cyc) 00:07:43.948 [2024-12-06T18:05:54.525Z] total_run_count: 4449000 00:07:43.948 [2024-12-06T18:05:54.525Z] tsc_hz: 2700000000 (cyc) 00:07:43.948 [2024-12-06T18:05:54.525Z] ====================================== 00:07:43.948 [2024-12-06T18:05:54.525Z] poller_cost: 607 (cyc), 224 (nsec) 00:07:43.948 00:07:43.948 real 0m1.193s 00:07:43.948 user 0m1.131s 00:07:43.948 sys 0m0.057s 00:07:43.948 19:05:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.948 19:05:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.948 ************************************ 00:07:43.948 END TEST thread_poller_perf 00:07:43.948 ************************************ 00:07:43.948 19:05:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:43.948 00:07:43.948 real 0m2.641s 00:07:43.948 user 0m2.407s 00:07:43.948 sys 0m0.239s 00:07:43.948 19:05:54 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.948 19:05:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.948 ************************************ 00:07:43.948 END TEST thread 00:07:43.948 ************************************ 00:07:43.948 19:05:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:43.948 19:05:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:43.948 19:05:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.948 19:05:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.948 19:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 ************************************ 00:07:44.207 START TEST app_cmdline 00:07:44.207 ************************************ 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.207 * Looking for test storage... 00:07:44.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.207 19:05:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.207 --rc genhtml_branch_coverage=1 
00:07:44.207 --rc genhtml_function_coverage=1 00:07:44.207 --rc genhtml_legend=1 00:07:44.207 --rc geninfo_all_blocks=1 00:07:44.207 --rc geninfo_unexecuted_blocks=1 00:07:44.207 00:07:44.207 ' 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.207 --rc genhtml_branch_coverage=1 00:07:44.207 --rc genhtml_function_coverage=1 00:07:44.207 --rc genhtml_legend=1 00:07:44.207 --rc geninfo_all_blocks=1 00:07:44.207 --rc geninfo_unexecuted_blocks=1 00:07:44.207 00:07:44.207 ' 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.207 --rc genhtml_branch_coverage=1 00:07:44.207 --rc genhtml_function_coverage=1 00:07:44.207 --rc genhtml_legend=1 00:07:44.207 --rc geninfo_all_blocks=1 00:07:44.207 --rc geninfo_unexecuted_blocks=1 00:07:44.207 00:07:44.207 ' 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.207 --rc genhtml_branch_coverage=1 00:07:44.207 --rc genhtml_function_coverage=1 00:07:44.207 --rc genhtml_legend=1 00:07:44.207 --rc geninfo_all_blocks=1 00:07:44.207 --rc geninfo_unexecuted_blocks=1 00:07:44.207 00:07:44.207 ' 00:07:44.207 19:05:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:44.207 19:05:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1002019 00:07:44.207 19:05:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:44.207 19:05:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1002019 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1002019 ']' 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.207 19:05:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 [2024-12-06 19:05:54.718250] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:44.207 [2024-12-06 19:05:54.718329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002019 ] 00:07:44.207 [2024-12-06 19:05:54.781026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.467 [2024-12-06 19:05:54.837845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.725 19:05:55 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.725 19:05:55 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:44.725 19:05:55 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:44.983 { 00:07:44.983 "version": "SPDK v25.01-pre git sha1 1148849d6", 00:07:44.983 "fields": { 00:07:44.983 "major": 25, 00:07:44.983 "minor": 1, 00:07:44.983 "patch": 0, 00:07:44.983 "suffix": "-pre", 00:07:44.983 "commit": "1148849d6" 00:07:44.983 } 00:07:44.983 } 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:44.983 19:05:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:44.983 19:05:55 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.243 request: 00:07:45.243 { 00:07:45.243 "method": "env_dpdk_get_mem_stats", 00:07:45.243 "req_id": 1 00:07:45.243 } 00:07:45.243 Got JSON-RPC error response 00:07:45.243 response: 00:07:45.243 { 00:07:45.243 "code": -32601, 00:07:45.243 "message": "Method not found" 00:07:45.243 } 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.243 19:05:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1002019 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1002019 ']' 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1002019 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1002019 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1002019' 00:07:45.243 killing process with pid 1002019 00:07:45.243 
19:05:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 1002019 00:07:45.243 19:05:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 1002019 00:07:45.866 00:07:45.866 real 0m1.598s 00:07:45.866 user 0m1.977s 00:07:45.866 sys 0m0.468s 00:07:45.866 19:05:56 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.866 19:05:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.866 ************************************ 00:07:45.866 END TEST app_cmdline 00:07:45.866 ************************************ 00:07:45.866 19:05:56 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:45.866 19:05:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.866 19:05:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.866 19:05:56 -- common/autotest_common.sh@10 -- # set +x 00:07:45.866 ************************************ 00:07:45.866 START TEST version 00:07:45.866 ************************************ 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:45.866 * Looking for test storage... 
00:07:45.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.866 19:05:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.866 19:05:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.866 19:05:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.866 19:05:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.866 19:05:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.866 19:05:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.866 19:05:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.866 19:05:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.866 19:05:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.866 19:05:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.866 19:05:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.866 19:05:56 version -- scripts/common.sh@344 -- # case "$op" in 00:07:45.866 19:05:56 version -- scripts/common.sh@345 -- # : 1 00:07:45.866 19:05:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.866 19:05:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.866 19:05:56 version -- scripts/common.sh@365 -- # decimal 1 00:07:45.866 19:05:56 version -- scripts/common.sh@353 -- # local d=1 00:07:45.866 19:05:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.866 19:05:56 version -- scripts/common.sh@355 -- # echo 1 00:07:45.866 19:05:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.866 19:05:56 version -- scripts/common.sh@366 -- # decimal 2 00:07:45.866 19:05:56 version -- scripts/common.sh@353 -- # local d=2 00:07:45.866 19:05:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.866 19:05:56 version -- scripts/common.sh@355 -- # echo 2 00:07:45.866 19:05:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.866 19:05:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.866 19:05:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.866 19:05:56 version -- scripts/common.sh@368 -- # return 0 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.866 --rc genhtml_branch_coverage=1 00:07:45.866 --rc genhtml_function_coverage=1 00:07:45.866 --rc genhtml_legend=1 00:07:45.866 --rc geninfo_all_blocks=1 00:07:45.866 --rc geninfo_unexecuted_blocks=1 00:07:45.866 00:07:45.866 ' 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.866 --rc genhtml_branch_coverage=1 00:07:45.866 --rc genhtml_function_coverage=1 00:07:45.866 --rc genhtml_legend=1 00:07:45.866 --rc geninfo_all_blocks=1 00:07:45.866 --rc geninfo_unexecuted_blocks=1 00:07:45.866 00:07:45.866 ' 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.866 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.866 --rc genhtml_branch_coverage=1 00:07:45.866 --rc genhtml_function_coverage=1 00:07:45.866 --rc genhtml_legend=1 00:07:45.866 --rc geninfo_all_blocks=1 00:07:45.866 --rc geninfo_unexecuted_blocks=1 00:07:45.866 00:07:45.866 ' 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.866 --rc genhtml_branch_coverage=1 00:07:45.866 --rc genhtml_function_coverage=1 00:07:45.866 --rc genhtml_legend=1 00:07:45.866 --rc geninfo_all_blocks=1 00:07:45.866 --rc geninfo_unexecuted_blocks=1 00:07:45.866 00:07:45.866 ' 00:07:45.866 19:05:56 version -- app/version.sh@17 -- # get_header_version major 00:07:45.866 19:05:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.866 19:05:56 version -- app/version.sh@14 -- # cut -f2 00:07:45.866 19:05:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.866 19:05:56 version -- app/version.sh@17 -- # major=25 00:07:45.866 19:05:56 version -- app/version.sh@18 -- # get_header_version minor 00:07:45.866 19:05:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.866 19:05:56 version -- app/version.sh@14 -- # cut -f2 00:07:45.866 19:05:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.866 19:05:56 version -- app/version.sh@18 -- # minor=1 00:07:45.866 19:05:56 version -- app/version.sh@19 -- # get_header_version patch 00:07:45.866 19:05:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.866 19:05:56 version -- app/version.sh@14 -- # cut -f2 00:07:45.866 19:05:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.866 
19:05:56 version -- app/version.sh@19 -- # patch=0 00:07:45.866 19:05:56 version -- app/version.sh@20 -- # get_header_version suffix 00:07:45.866 19:05:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.866 19:05:56 version -- app/version.sh@14 -- # cut -f2 00:07:45.866 19:05:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.866 19:05:56 version -- app/version.sh@20 -- # suffix=-pre 00:07:45.866 19:05:56 version -- app/version.sh@22 -- # version=25.1 00:07:45.866 19:05:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:45.866 19:05:56 version -- app/version.sh@28 -- # version=25.1rc0 00:07:45.866 19:05:56 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:45.866 19:05:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:45.866 19:05:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:45.866 19:05:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:45.866 00:07:45.866 real 0m0.199s 00:07:45.866 user 0m0.132s 00:07:45.866 sys 0m0.093s 00:07:45.866 19:05:56 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.866 19:05:56 version -- common/autotest_common.sh@10 -- # set +x 00:07:45.866 ************************************ 00:07:45.866 END TEST version 00:07:45.867 ************************************ 00:07:45.867 19:05:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:45.867 19:05:56 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:45.867 19:05:56 -- spdk/autotest.sh@194 -- # uname -s 00:07:45.867 19:05:56 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:45.867 19:05:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:45.867 19:05:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:45.867 19:05:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:45.867 19:05:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:45.867 19:05:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:45.867 19:05:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.867 19:05:56 -- common/autotest_common.sh@10 -- # set +x 00:07:46.190 19:05:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:46.190 19:05:56 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:46.190 19:05:56 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:46.190 19:05:56 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:46.190 19:05:56 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:46.190 19:05:56 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:46.190 19:05:56 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.190 19:05:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.190 19:05:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.190 19:05:56 -- common/autotest_common.sh@10 -- # set +x 00:07:46.190 ************************************ 00:07:46.190 START TEST nvmf_tcp 00:07:46.190 ************************************ 00:07:46.190 19:05:56 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.190 * Looking for test storage... 
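In the trace above, app/version.sh assembles the `25.1rc0` string by grepping each `SPDK_VERSION_*` `#define` out of `include/spdk/version.h`, taking the value with `cut -f2`, and stripping quotes with `tr -d '"'`. A minimal standalone sketch of that technique follows; the temp-file header and the `-pre` → `rc0` mapping are assumptions inferred from the log (`version=25.1rc0`), not SPDK's actual script:

```shell
# Stand-in for include/spdk/version.h written to a temp file; fields are
# tab-separated because `cut -f2` splits on tabs by default.
hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n' > "$hdr"
printf '#define SPDK_VERSION_MINOR\t1\n' >> "$hdr"
printf '#define SPDK_VERSION_PATCH\t0\n' >> "$hdr"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n' >> "$hdr"

# grep the one #define line, take the value field, drop surrounding quotes
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="${major}.${minor}"
if (( patch != 0 )); then
    version="${version}.${patch}"
fi
# Assumption from the trace: a -pre suffix is published as an rc0 tag.
if [[ $suffix == -pre ]]; then
    version="${version}rc0"
fi
echo "$version"   # prints 25.1rc0
```

The log then cross-checks this against `python3 -c 'import spdk; print(spdk.__version__)'`, so both the C header and the Python package must agree before the suite proceeds.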
00:07:46.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:46.190 19:05:56 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.190 19:05:56 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.190 19:05:56 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.190 19:05:56 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.190 19:05:56 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.190 19:05:56 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.190 19:05:56 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.190 19:05:56 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.190 19:05:56 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.190 19:05:56 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.191 19:05:56 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:46.191 19:05:56 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.191 19:05:56 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.191 --rc genhtml_branch_coverage=1 00:07:46.191 --rc genhtml_function_coverage=1 00:07:46.191 --rc genhtml_legend=1 00:07:46.191 --rc geninfo_all_blocks=1 00:07:46.191 --rc geninfo_unexecuted_blocks=1 00:07:46.191 00:07:46.191 ' 00:07:46.191 19:05:56 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.191 --rc genhtml_branch_coverage=1 00:07:46.191 --rc genhtml_function_coverage=1 00:07:46.191 --rc genhtml_legend=1 00:07:46.191 --rc geninfo_all_blocks=1 00:07:46.191 --rc geninfo_unexecuted_blocks=1 00:07:46.191 00:07:46.191 ' 00:07:46.191 19:05:56 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:46.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.191 --rc genhtml_branch_coverage=1 00:07:46.191 --rc genhtml_function_coverage=1 00:07:46.191 --rc genhtml_legend=1 00:07:46.191 --rc geninfo_all_blocks=1 00:07:46.191 --rc geninfo_unexecuted_blocks=1 00:07:46.191 00:07:46.191 ' 00:07:46.191 19:05:56 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.191 --rc genhtml_branch_coverage=1 00:07:46.191 --rc genhtml_function_coverage=1 00:07:46.191 --rc genhtml_legend=1 00:07:46.191 --rc geninfo_all_blocks=1 00:07:46.191 --rc geninfo_unexecuted_blocks=1 00:07:46.191 00:07:46.191 ' 00:07:46.191 19:05:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:46.191 19:05:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:46.191 19:05:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:46.191 19:05:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.191 19:05:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.191 19:05:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.191 ************************************ 00:07:46.191 START TEST nvmf_target_core 00:07:46.191 ************************************ 00:07:46.191 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:46.191 * Looking for test storage... 
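The `lt 1.15 2` check that scripts/common.sh traces before each suite (to decide whether the installed lcov understands the `--rc` options) splits both version strings on `.`, `-` and `:`, then walks the components numerically, padding the shorter array with zeros. A self-contained sketch of the same walk; `version_lt` is my own name for illustration, not the `cmp_versions`/`lt` helpers themselves:

```shell
# Returns 0 (true) when $1 sorts strictly before $2, comparing numeric
# components split on '.', '-' and ':' -- the loop the trace shows as
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )).
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i a b
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        a=${v1[i]:-0}; b=${v2[i]:-0}
        # non-numeric components are treated as 0, like the decimal helper
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal
}

version_lt 1.15 2 && echo "1.15 sorts before 2"
```

Comparing component-wise rather than lexically is the point: a plain string compare would put `1.15` after `1.2`, while this walk correctly treats 15 > 2 within the minor field.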
00:07:46.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:46.191 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.191 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.191 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.474 --rc genhtml_branch_coverage=1 00:07:46.474 --rc genhtml_function_coverage=1 00:07:46.474 --rc genhtml_legend=1 00:07:46.474 --rc geninfo_all_blocks=1 00:07:46.474 --rc geninfo_unexecuted_blocks=1 00:07:46.474 00:07:46.474 ' 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.474 --rc genhtml_branch_coverage=1 
00:07:46.474 --rc genhtml_function_coverage=1 00:07:46.474 --rc genhtml_legend=1 00:07:46.474 --rc geninfo_all_blocks=1 00:07:46.474 --rc geninfo_unexecuted_blocks=1 00:07:46.474 00:07:46.474 ' 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.474 --rc genhtml_branch_coverage=1 00:07:46.474 --rc genhtml_function_coverage=1 00:07:46.474 --rc genhtml_legend=1 00:07:46.474 --rc geninfo_all_blocks=1 00:07:46.474 --rc geninfo_unexecuted_blocks=1 00:07:46.474 00:07:46.474 ' 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.474 --rc genhtml_branch_coverage=1 00:07:46.474 --rc genhtml_function_coverage=1 00:07:46.474 --rc genhtml_legend=1 00:07:46.474 --rc geninfo_all_blocks=1 00:07:46.474 --rc geninfo_unexecuted_blocks=1 00:07:46.474 00:07:46.474 ' 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.474 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.475 ************************************ 00:07:46.475 START TEST nvmf_abort 00:07:46.475 ************************************ 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:46.475 * Looking for test storage... 
00:07:46.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.475 
19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.475 --rc genhtml_branch_coverage=1 00:07:46.475 --rc genhtml_function_coverage=1 00:07:46.475 --rc genhtml_legend=1 00:07:46.475 --rc geninfo_all_blocks=1 00:07:46.475 --rc 
geninfo_unexecuted_blocks=1 00:07:46.475 00:07:46.475 ' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.475 --rc genhtml_branch_coverage=1 00:07:46.475 --rc genhtml_function_coverage=1 00:07:46.475 --rc genhtml_legend=1 00:07:46.475 --rc geninfo_all_blocks=1 00:07:46.475 --rc geninfo_unexecuted_blocks=1 00:07:46.475 00:07:46.475 ' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.475 --rc genhtml_branch_coverage=1 00:07:46.475 --rc genhtml_function_coverage=1 00:07:46.475 --rc genhtml_legend=1 00:07:46.475 --rc geninfo_all_blocks=1 00:07:46.475 --rc geninfo_unexecuted_blocks=1 00:07:46.475 00:07:46.475 ' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.475 --rc genhtml_branch_coverage=1 00:07:46.475 --rc genhtml_function_coverage=1 00:07:46.475 --rc genhtml_legend=1 00:07:46.475 --rc geninfo_all_blocks=1 00:07:46.475 --rc geninfo_unexecuted_blocks=1 00:07:46.475 00:07:46.475 ' 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.475 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.476 19:05:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.476 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:49.009 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:49.009 19:05:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:49.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:49.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:49.010 19:05:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:49.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:49.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:49.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:49.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:07:49.010 00:07:49.010 --- 10.0.0.2 ping statistics --- 00:07:49.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.010 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:07:49.010 00:07:49.010 --- 10.0.0.1 ping statistics --- 00:07:49.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.010 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1004117 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1004117 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1004117 ']' 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.010 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.011 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.011 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.011 [2024-12-06 19:05:59.405477] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:49.011 [2024-12-06 19:05:59.405582] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.011 [2024-12-06 19:05:59.479251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.011 [2024-12-06 19:05:59.540853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.011 [2024-12-06 19:05:59.540918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.011 [2024-12-06 19:05:59.540947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.011 [2024-12-06 19:05:59.540959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.011 [2024-12-06 19:05:59.540970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
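The network plumbing traced earlier in this log (`ip netns add`, `ip link set ... netns`, the `10.0.0.x` address assignments, the iptables accept rule, and the two pings) can be condensed into a short sketch. This is a dry-run reconstruction from the trace, not the actual `nvmf/common.sh` code: interface and namespace names are copied from this log, and the commands are collected into a string instead of executed, since the real steps need root and the physical `cvl_0_*` ports.

```shell
# Dry-run sketch of the nvmf_tcp_init plumbing traced above: port cvl_0_0 is
# moved into a network namespace as the target side (10.0.0.2) while cvl_0_1
# stays in the root namespace as the initiator side (10.0.0.1). Commands are
# collected rather than executed.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
CMDS=""
run() { CMDS="$CMDS$*
"; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
printf '%s' "$CMDS"
```

Because the two ports of the same NIC are wired back to back on these phy test nodes, moving one port into a namespace yields a real on-wire target/initiator pair without veth devices, which is why both pings in the log succeed with sub-millisecond RTTs.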
00:07:49.011 [2024-12-06 19:05:59.542462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.011 [2024-12-06 19:05:59.542518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.011 [2024-12-06 19:05:59.542522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 [2024-12-06 19:05:59.703717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 Malloc0 00:07:49.268 19:05:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 Delay0 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 [2024-12-06 19:05:59.782046] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.268 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:49.542 [2024-12-06 19:05:59.898611] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:51.441 Initializing NVMe Controllers 00:07:51.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:51.442 controller IO queue size 128 less than required 00:07:51.442 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:51.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:51.442 Initialization complete. Launching workers. 
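The target-side setup interleaved through the `rpc_cmd` trace above boils down to a short RPC sequence. The sketch below is a dry-run reconstruction from this log, with arguments copied verbatim; it assumes `rpc_cmd` ultimately dispatches to `scripts/rpc.py` against the running `nvmf_tgt`, and it only collects the command strings rather than executing them.

```shell
# Dry-run sketch of the target/abort.sh setup traced above: a 64 MiB malloc
# bdev is wrapped in a delay bdev, exported over NVMe/TCP, and then the abort
# example drives it at queue depth 128. Commands are collected, not executed.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode0
CMDS=""
rpc() { CMDS="$CMDS$SPDK/scripts/rpc.py $*
"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc bdev_malloc_create 64 4096 -b Malloc0
# 1000000 us per-op latencies keep I/O in flight long enough to be aborted
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_create_subsystem "$NQN" -a -s SPDK0
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
CMDS="$CMDS$SPDK/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
"
printf '%s' "$CMDS"
```

The delay bdev is the point of the test: with every I/O stalled for a second, the abort example's queue-depth-128 workload is guaranteed to find outstanding commands to abort, which matches the "abort submitted 28090" accounting in the results that follow.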
00:07:51.442 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28025 00:07:51.442 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28090, failed to submit 62 00:07:51.442 success 28029, unsuccessful 61, failed 0 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.442 rmmod nvme_tcp 00:07:51.442 rmmod nvme_fabrics 00:07:51.442 rmmod nvme_keyring 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:51.442 19:06:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1004117 ']' 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1004117 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1004117 ']' 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1004117 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.442 19:06:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1004117 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1004117' 00:07:51.701 killing process with pid 1004117 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1004117 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1004117 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.701 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:54.240 00:07:54.240 real 0m7.504s 00:07:54.240 user 0m10.564s 00:07:54.240 sys 0m2.601s 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:54.240 ************************************ 00:07:54.240 END TEST nvmf_abort 00:07:54.240 ************************************ 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.240 ************************************ 00:07:54.240 START TEST nvmf_ns_hotplug_stress 00:07:54.240 ************************************ 00:07:54.240 19:06:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:54.240 * Looking for test storage... 00:07:54.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.240 
19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.240 19:06:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.240 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:54.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.240 --rc genhtml_branch_coverage=1 00:07:54.240 --rc genhtml_function_coverage=1 00:07:54.240 --rc genhtml_legend=1 00:07:54.241 --rc geninfo_all_blocks=1 00:07:54.241 --rc geninfo_unexecuted_blocks=1 00:07:54.241 00:07:54.241 ' 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:54.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.241 --rc genhtml_branch_coverage=1 00:07:54.241 --rc genhtml_function_coverage=1 00:07:54.241 --rc genhtml_legend=1 00:07:54.241 --rc geninfo_all_blocks=1 00:07:54.241 --rc geninfo_unexecuted_blocks=1 00:07:54.241 00:07:54.241 ' 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:54.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.241 --rc genhtml_branch_coverage=1 00:07:54.241 --rc genhtml_function_coverage=1 00:07:54.241 --rc genhtml_legend=1 00:07:54.241 --rc geninfo_all_blocks=1 00:07:54.241 --rc geninfo_unexecuted_blocks=1 00:07:54.241 00:07:54.241 ' 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:54.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.241 --rc genhtml_branch_coverage=1 00:07:54.241 --rc genhtml_function_coverage=1 00:07:54.241 --rc genhtml_legend=1 00:07:54.241 --rc geninfo_all_blocks=1 00:07:54.241 --rc geninfo_unexecuted_blocks=1 00:07:54.241 
00:07:54.241 ' 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.241 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.242 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.242 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:54.242 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:54.242 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:54.242 19:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:56.143 19:06:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:56.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:56.143 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:56.143 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:56.144 19:06:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:56.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.144 19:06:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:56.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.144 19:06:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:56.144 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:56.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:07:56.144 00:07:56.144 --- 10.0.0.2 ping statistics --- 00:07:56.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.144 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:07:56.402 00:07:56.402 --- 10.0.0.1 ping statistics --- 00:07:56.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.402 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.402 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1006583 00:07:56.403 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:56.403 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1006583 00:07:56.403 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1006583 ']' 00:07:56.403 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.403 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.403 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:56.403 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.403 19:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.403 [2024-12-06 19:06:06.799141] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:56.403 [2024-12-06 19:06:06.799229] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.403 [2024-12-06 19:06:06.869902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.403 [2024-12-06 19:06:06.926727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.403 [2024-12-06 19:06:06.926780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.403 [2024-12-06 19:06:06.926809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.403 [2024-12-06 19:06:06.926820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.403 [2024-12-06 19:06:06.926829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:56.403 [2024-12-06 19:06:06.928366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:56.403 [2024-12-06 19:06:06.928431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:56.403 [2024-12-06 19:06:06.928435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:56.661 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:56.661 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:07:56.661 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:56.661 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:56.661 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:56.661 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:56.661 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:56.661 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:56.920 [2024-12-06 19:06:07.320530] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:56.920 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:57.178 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:57.436 [2024-12-06 19:06:07.911442] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:57.436 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:57.693 19:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:57.951 Malloc0
00:07:57.951 19:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:58.514 Delay0
00:07:58.514 19:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:58.771 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:59.028 NULL1
00:07:59.028 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:59.287 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1007394
00:07:59.287 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:59.287 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394
00:07:59.287 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:00.656 Read completed with error (sct=0, sc=11)
00:08:00.656 19:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:00.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.657 19:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:08:00.657 19:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:08:00.914 true
00:08:00.914 19:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394
00:08:00.914 19:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.845 19:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.103 19:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:02.103 19:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:02.361 true 00:08:02.361 19:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:02.361 19:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.619 19:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.877 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:02.877 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:03.134 true 00:08:03.135 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:03.135 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.392 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.649 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:03.649 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:03.907 true 00:08:03.907 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:03.907 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.838 19:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.107 19:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:05.107 19:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:05.365 true 00:08:05.365 19:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:05.365 19:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.622 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.880 
19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:05.880 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:06.137 true 00:08:06.137 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:06.137 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.395 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.653 19:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:06.653 19:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:06.911 true 00:08:06.911 19:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:06.911 19:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.282 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.282 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:08.283 19:06:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:08.541 true 00:08:08.541 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:08.541 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.798 19:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.056 19:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:09.056 19:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:09.314 true 00:08:09.314 19:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:09.314 19:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.571 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.829 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:09.829 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:10.086 true 00:08:10.086 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:10.086 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.016 19:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.319 19:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:11.319 19:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:11.591 true 00:08:11.591 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:11.591 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.848 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.104 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:12.104 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:12.360 true 00:08:12.360 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:12.360 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.616 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.873 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:12.873 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:13.129 true 00:08:13.129 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:13.129 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.500 19:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.500 19:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:14.500 19:06:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:14.758 true 00:08:14.758 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:14.758 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.015 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.299 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:15.299 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:15.556 true 00:08:15.556 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:15.556 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.812 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.069 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:16.069 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:16.327 true 00:08:16.327 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:16.327 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.260 19:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.527 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:17.527 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:17.784 true 00:08:17.784 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:17.784 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.042 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.300 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:18.300 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:18.566 true 00:08:18.566 19:06:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:18.566 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.824 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.082 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:19.082 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:19.340 true 00:08:19.598 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:19.598 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.531 19:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.789 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:20.789 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:21.047 true 00:08:21.047 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:21.047 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.305 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.563 19:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:21.563 19:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:21.821 true 00:08:21.821 19:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:21.821 19:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.753 19:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.011 19:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:23.011 19:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:23.268 true 00:08:23.268 19:06:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:23.268 19:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.525 19:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.782 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:23.782 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:24.038 true 00:08:24.038 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:24.038 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.295 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.565 19:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:24.565 19:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:24.821 true 00:08:24.821 19:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:24.821 19:06:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.749 19:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.264 19:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:26.264 19:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:26.522 true 00:08:26.522 19:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:26.522 19:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.779 19:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.036 19:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:27.036 19:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:27.294 true 00:08:27.294 19:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:27.294 19:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.228 19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.228 19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:28.228 19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:28.486 true 00:08:28.486 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394 00:08:28.486 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.744 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.310 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:29.310 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:29.310 true 00:08:29.310 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1007394
00:08:29.310 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:30.243 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:30.243 Initializing NVMe Controllers
00:08:30.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:30.243 Controller IO queue size 128, less than required.
00:08:30.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:30.243 Controller IO queue size 128, less than required.
00:08:30.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:30.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:30.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:30.243 Initialization complete. Launching workers.
00:08:30.243 ========================================================
00:08:30.243 Latency(us)
00:08:30.243 Device Information : IOPS MiB/s Average min max
00:08:30.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 652.63 0.32 87517.85 3072.12 1012959.83
00:08:30.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8817.63 4.31 14516.56 3353.93 453712.96
00:08:30.243 ========================================================
00:08:30.243 Total : 9470.27 4.62 19547.37 3072.12 1012959.83
00:08:30.243 
00:08:30.500 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:30.500 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:30.758 true
00:08:30.758 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1007394
00:08:30.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1007394) - No such process
00:08:30.758 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1007394
00:08:30.758 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:31.016 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:31.273 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:31.273 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
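The Total row of the spdk_nvme_perf summary in the log can be cross-checked: total IOPS is the sum over the two namespaces, and the numbers are consistent with the Average column of the Total row being the IOPS-weighted mean of the per-namespace average latencies. A quick sanity check (the NSID-to-bdev mapping in the comments is inferred from the setup order earlier in the log, not stated by perf itself):

```python
# Per-namespace results copied from the perf summary table in the log.
ns1_iops, ns1_avg_us = 652.63, 87517.85    # NSID 1 (Delay0-backed, hence slow)
ns2_iops, ns2_avg_us = 8817.63, 14516.56   # NSID 2 (NULL1-backed, hence fast)

total_iops = ns1_iops + ns2_iops
# Weight each namespace's average latency by its share of total IOPS.
weighted_avg_us = (ns1_iops * ns1_avg_us + ns2_iops * ns2_avg_us) / total_iops

assert abs(total_iops - 9470.27) < 0.02       # reported Total IOPS
assert abs(weighted_avg_us - 19547.37) < 0.5  # reported Total average latency (us)
```

The large min/max spread of the Total row (3072.12 to 1012959.83 us) is simply the union of the two per-namespace ranges, dominated by the artificially delayed namespace.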
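The hotplug loop in the log gates each iteration on `kill -0 $PERF_PID`, and the `line 44: kill: (1007394) - No such process` entry is how it detects that spdk_nvme_perf has exited after its 30-second run. A minimal sketch of that liveness-check pattern (a `sleep` child stands in for the perf process; nothing here is SPDK-specific):

```python
import os
import subprocess

def is_alive(pid: int) -> bool:
    """Equivalent of `kill -0 <pid>`: probe for existence, deliver no signal."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:  # the shell's "No such process" case
        return False

# A short-lived child stands in for the spdk_nvme_perf process.
child = subprocess.Popen(["sleep", "1"])
assert is_alive(child.pid)       # loop keeps hot-plugging while this holds

child.terminate()
child.wait()                     # reap the child, so its PID disappears
assert not is_alive(child.pid)   # check fails -> loop ends, cleanup begins
```

This is why the script runs `wait $PERF_PID` right after the failed `kill -0`: the check only tells it the process is gone, while `wait` collects the exit status.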
00:08:31.273 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:31.273 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.273 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:31.530 null0 00:08:31.530 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.530 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.530 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:31.811 null1 00:08:31.812 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.812 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.812 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:32.069 null2 00:08:32.069 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.069 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.069 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:32.327 null3 00:08:32.327 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.327 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.327 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:32.584 null4 00:08:32.584 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.584 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.584 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:32.843 null5 00:08:32.843 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.843 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.843 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:33.100 null6 00:08:33.100 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.100 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.100 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:33.359 null7 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.359 19:06:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1011615 1011616 1011618 1011620 1011622 1011624 1011626 1011628 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.359 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.925 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.925 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.925 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.925 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.925 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.925 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.925 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.925 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.183 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.441 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.441 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.441 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.441 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.441 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:34.441 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.441 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.441 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.698 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.955 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.955 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.956 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.956 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.956 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.956 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.956 19:06:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.956 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.213 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.471 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.471 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.471 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.471 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.471 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.471 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.471 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.729 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.987 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.245 19:06:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.245 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.245 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.245 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.245 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.245 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.245 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.245 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.504 19:06:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.504 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.762 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.762 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.762 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.762 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.762 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.762 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.762 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.762 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.019 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.020 19:06:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.020 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.277 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.277 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.277 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.277 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.277 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.277 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.534 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.534 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.792 
19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.792 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.049 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.049 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.049 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.049 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.049 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.049 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.049 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.049 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.306 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.563 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.563 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.563 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.563 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.563 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.563 
19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.563 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.563 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.831 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.832 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.832 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.089 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.089 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.089 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.346 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.346 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.346 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.346 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.346 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.603 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.603 rmmod nvme_tcp 00:08:39.603 rmmod nvme_fabrics 00:08:39.603 rmmod nvme_keyring 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1006583 ']' 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1006583 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 1006583 ']' 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1006583 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1006583 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1006583' 00:08:39.603 killing process with pid 1006583 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1006583 00:08:39.603 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1006583 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.861 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.399 00:08:42.399 real 0m48.023s 00:08:42.399 user 3m43.773s 00:08:42.399 sys 0m15.871s 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.399 ************************************ 00:08:42.399 END TEST nvmf_ns_hotplug_stress 00:08:42.399 ************************************ 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.399 ************************************ 00:08:42.399 START TEST nvmf_delete_subsystem 00:08:42.399 ************************************ 00:08:42.399 
19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:42.399 * Looking for test storage... 00:08:42.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.399 19:06:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.399 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.400 19:06:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.400 --rc genhtml_branch_coverage=1 00:08:42.400 --rc genhtml_function_coverage=1 00:08:42.400 --rc genhtml_legend=1 00:08:42.400 --rc geninfo_all_blocks=1 00:08:42.400 --rc geninfo_unexecuted_blocks=1 00:08:42.400 00:08:42.400 ' 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.400 --rc genhtml_branch_coverage=1 00:08:42.400 --rc genhtml_function_coverage=1 00:08:42.400 --rc genhtml_legend=1 00:08:42.400 --rc geninfo_all_blocks=1 00:08:42.400 --rc geninfo_unexecuted_blocks=1 00:08:42.400 00:08:42.400 ' 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:42.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.400 --rc genhtml_branch_coverage=1 00:08:42.400 --rc genhtml_function_coverage=1 00:08:42.400 --rc genhtml_legend=1 00:08:42.400 --rc geninfo_all_blocks=1 00:08:42.400 --rc geninfo_unexecuted_blocks=1 00:08:42.400 00:08:42.400 ' 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.400 --rc genhtml_branch_coverage=1 00:08:42.400 --rc genhtml_function_coverage=1 00:08:42.400 --rc genhtml_legend=1 00:08:42.400 --rc geninfo_all_blocks=1 00:08:42.400 --rc geninfo_unexecuted_blocks=1 00:08:42.400 00:08:42.400 ' 
00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.400 19:06:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.400 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.401 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.370 19:06:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:44.370 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:44.370 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:44.370 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:44.370 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.370 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
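The `nvmf_tcp_init` sequence above moves the target port `cvl_0_0` into the namespace `cvl_0_0_ns_spdk`, puts 10.0.0.1 on the initiator side and 10.0.0.2 on the target side, opens TCP/4420 in iptables, and ping-checks both directions. It can be sketched as a dry-run script; `run` only echoes here, since the real commands need root and the `cvl_*` interfaces of this specific rig:

```shell
# Dry-run sketch of the nvmf_tcp_init namespace setup shown in the log.
# `run` echoes instead of executing: the real commands need root and the
# cvl_0_0/cvl_0_1 interfaces present on this test machine.
run() { echo "+ $*"; }

setup_target_ns() {
  local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
  run ip -4 addr flush "$tgt"
  run ip -4 addr flush "$ini"
  run ip netns add "$ns"
  run ip link set "$tgt" netns "$ns"                          # target port lives in the netns
  run ip addr add 10.0.0.1/24 dev "$ini"                      # initiator side
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"  # target side
  run ip link set "$ini" up
  run ip netns exec "$ns" ip link set "$tgt" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                                      # initiator -> target
  run ip netns exec "$ns" ping -c 1 10.0.0.1                  # target -> initiator
}

setup_target_ns
```

Keeping the target NIC in its own namespace is what lets initiator and target share one host while still exercising a real TCP path between two interfaces.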
00:08:44.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:08:44.371 00:08:44.371 --- 10.0.0.2 ping statistics --- 00:08:44.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.371 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:08:44.371 00:08:44.371 --- 10.0.0.1 ping statistics --- 00:08:44.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.371 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:44.371 19:06:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1014514 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1014514 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1014514 ']' 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.371 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.631 [2024-12-06 19:06:54.982504] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:08:44.631 [2024-12-06 19:06:54.982587] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.631 [2024-12-06 19:06:55.053751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:44.631 [2024-12-06 19:06:55.112325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.631 [2024-12-06 19:06:55.112386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.631 [2024-12-06 19:06:55.112400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.631 [2024-12-06 19:06:55.112411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.631 [2024-12-06 19:06:55.112420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
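The `nvmfappstart`/`waitforlisten` pattern above backgrounds `nvmf_tgt` inside the namespace, records its pid, and polls until the RPC socket `/var/tmp/spdk.sock` appears. A minimal sketch of that wait loop, with a background `touch` standing in for `nvmf_tgt` (the real binary and namespace are specific to the rig):

```shell
# Sketch of the launch-and-wait pattern behind nvmfappstart/waitforlisten.
# A delayed file creation stands in for the real command:
#   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
rpc_sock=$(mktemp -u)            # stand-in for /var/tmp/spdk.sock
( sleep 0.2; : > "$rpc_sock" ) & # simulated target coming up
nvmfpid=$!

waitforlisten() {
  retries=100
  while [ ! -e "$rpc_sock" ]; do
    retries=$((retries - 1))
    [ "$retries" -gt 0 ] || { echo "nvmf_tgt did not come up" >&2; return 1; }
    sleep 0.1
  done
  echo "listening on $rpc_sock"
}

waitforlisten
```

The real helper additionally verifies the socket answers RPCs before returning; this sketch only checks existence.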
00:08:44.631 [2024-12-06 19:06:55.113930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.631 [2024-12-06 19:06:55.113936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.890 [2024-12-06 19:06:55.261326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.890 [2024-12-06 19:06:55.277565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.890 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.890 NULL1 00:08:44.891 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.891 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:44.891 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.891 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.891 Delay0 00:08:44.891 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.891 19:06:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.892 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.892 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.892 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.892 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1014540 00:08:44.892 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:44.892 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:44.892 [2024-12-06 19:06:55.362343] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
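The `rpc_cmd` calls above build the target configuration that `spdk_nvme_perf` then attacks: a TCP transport, subsystem `cnode1`, a listener on 10.0.0.2:4420, and a delay bdev (`Delay0` layered on null bdev `NULL1`) attached as namespace 1. Expressed as dry-run `rpc.py` invocations (SPDK's JSON-RPC client; only echoed here, since it needs the running `nvmf_tgt`):

```shell
# Dry-run of the rpc_cmd sequence from the log, as scripts/rpc.py calls.
rpc() { echo "+ rpc.py $*"; }

configure_subsystem() {
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512     # args as in the log: name, size, block size
  rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}

configure_subsystem
```

The large delay-bdev latencies are the point of the test: they keep many I/Os in flight so that the upcoming `nvmf_delete_subsystem` races against queued requests.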
00:08:46.790 19:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.790 19:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.790 19:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 
00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 [2024-12-06 19:06:57.485451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd7860 is same with the state(6) to be set 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, 
sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Write completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 starting I/O failed: -6 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.048 Read completed with error (sct=0, sc=8) 00:08:47.049 starting I/O failed: -6 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 
starting I/O failed: -6 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 starting I/O failed: -6 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 starting I/O failed: -6 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 starting I/O failed: -6 
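The long run of `Read/Write completed with error (sct=0, sc=8)` records here is expected: these are perf's in-flight I/Os being aborted as the subsystem is torn down underneath them. When triaging a saved console log, a short awk pass can condense the storm into per-operation counts (illustrative helper, not part of the test scripts):

```shell
# Summarize "completed with error" records from a saved log on stdin:
# count aborted reads vs writes per (sct, sc) status pair.
summarize_errors() {
  awk '/completed with error/ {
         op = ($0 ~ /Read completed/) ? "Read" : "Write"
         match($0, /sct=[0-9]+, sc=[0-9]+/)
         status = substr($0, RSTART, RLENGTH)
         count[op " " status]++
       }
       END { for (k in count) print count[k], k }'
}

printf '%s\n' \
  'Read completed with error (sct=0, sc=8)' \
  'Write completed with error (sct=0, sc=8)' \
  'Read completed with error (sct=0, sc=8)' | summarize_errors
```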
00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 starting I/O failed: -6 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 starting I/O failed: -6 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 starting I/O failed: -6 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 [2024-12-06 19:06:57.486285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff84800d4b0 is same with the state(6) to be set 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 
00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.049 Read completed with error (sct=0, sc=8) 00:08:47.049 Write 
completed with error (sct=0, sc=8) 00:08:47.049 Write completed with error (sct=0, sc=8) 00:08:47.982 [2024-12-06 19:06:58.458156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd89b0 is same with the state(6) to be set 00:08:47.982 Read completed with error (sct=0, sc=8) 00:08:47.982 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 [2024-12-06 19:06:58.488029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff84800d020 is same with the state(6) to be set 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write 
completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 [2024-12-06 19:06:58.488307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff84800d7e0 is same with the state(6) to be set 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write 
completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 [2024-12-06 19:06:58.488477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd7680 is same with the state(6) to be set 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Write completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 Read completed with error (sct=0, sc=8) 00:08:47.983 [2024-12-06 19:06:58.489278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd72c0 is same with the state(6) to be set 00:08:47.983 Initializing NVMe Controllers 00:08:47.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:47.983 Controller IO queue size 128, 
less than required. 00:08:47.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:47.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:47.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:47.983 Initialization complete. Launching workers. 00:08:47.983 ======================================================== 00:08:47.983 Latency(us) 00:08:47.983 Device Information : IOPS MiB/s Average min max 00:08:47.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.72 0.08 912228.83 657.55 1013543.68 00:08:47.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.67 0.08 894894.99 358.84 1013613.57 00:08:47.983 ======================================================== 00:08:47.983 Total : 333.39 0.16 903407.15 358.84 1013613.57 00:08:47.983 00:08:47.983 [2024-12-06 19:06:58.489801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd89b0 (9): Bad file descriptor 00:08:47.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:47.983 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.983 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:47.983 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1014540 00:08:47.983 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1014540 00:08:48.549 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1014540) - No such process 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1014540 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1014540 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1014540 00:08:48.549 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:48.550 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.550 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.550 19:06:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.550 [2024-12-06 19:06:59.014015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1014953 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1014953 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:48.550 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.550 [2024-12-06 19:06:59.086934] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:49.115 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:49.115 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1014953 00:08:49.115 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:49.680 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:49.680 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1014953 00:08:49.680 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.244 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.244 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1014953 00:08:50.244 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.499 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.499 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1014953 00:08:50.499 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 
00:08:51.061 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.061 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1014953 00:08:51.061 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.623 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.623 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1014953 00:08:51.623 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.881 Initializing NVMe Controllers 00:08:51.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:51.881 Controller IO queue size 128, less than required. 00:08:51.881 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:51.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:51.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:51.881 Initialization complete. Launching workers. 
00:08:51.881 ======================================================== 00:08:51.881 Latency(us) 00:08:51.881 Device Information : IOPS MiB/s Average min max 00:08:51.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004222.56 1000220.86 1013086.64 00:08:51.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004596.57 1000260.46 1012640.09 00:08:51.881 ======================================================== 00:08:51.881 Total : 256.00 0.12 1004409.56 1000220.86 1013086.64 00:08:51.881 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1014953 00:08:52.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1014953) - No such process 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1014953 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:08:52.139 rmmod nvme_tcp 00:08:52.139 rmmod nvme_fabrics 00:08:52.139 rmmod nvme_keyring 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1014514 ']' 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1014514 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1014514 ']' 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1014514 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1014514 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1014514' 00:08:52.139 killing process with pid 1014514 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1014514 00:08:52.139 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1014514 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.397 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.937 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:54.937 00:08:54.937 real 0m12.469s 00:08:54.937 user 0m27.853s 00:08:54.937 sys 0m3.089s 00:08:54.937 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.937 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.937 ************************************ 00:08:54.937 END TEST 
nvmf_delete_subsystem 00:08:54.937 ************************************ 00:08:54.937 19:07:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:54.937 19:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.937 19:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.937 19:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.937 ************************************ 00:08:54.937 START TEST nvmf_host_management 00:08:54.937 ************************************ 00:08:54.937 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:54.937 * Looking for test storage... 00:08:54.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.937 19:07:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:54.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.937 --rc genhtml_branch_coverage=1 00:08:54.937 --rc genhtml_function_coverage=1 00:08:54.937 --rc genhtml_legend=1 00:08:54.937 --rc 
geninfo_all_blocks=1 00:08:54.937 --rc geninfo_unexecuted_blocks=1 00:08:54.937 00:08:54.937 ' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:54.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.937 --rc genhtml_branch_coverage=1 00:08:54.937 --rc genhtml_function_coverage=1 00:08:54.937 --rc genhtml_legend=1 00:08:54.937 --rc geninfo_all_blocks=1 00:08:54.937 --rc geninfo_unexecuted_blocks=1 00:08:54.937 00:08:54.937 ' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:54.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.937 --rc genhtml_branch_coverage=1 00:08:54.937 --rc genhtml_function_coverage=1 00:08:54.937 --rc genhtml_legend=1 00:08:54.937 --rc geninfo_all_blocks=1 00:08:54.937 --rc geninfo_unexecuted_blocks=1 00:08:54.937 00:08:54.937 ' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:54.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.937 --rc genhtml_branch_coverage=1 00:08:54.937 --rc genhtml_function_coverage=1 00:08:54.937 --rc genhtml_legend=1 00:08:54.937 --rc geninfo_all_blocks=1 00:08:54.937 --rc geninfo_unexecuted_blocks=1 00:08:54.937 00:08:54.937 ' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.937 
19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.937 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.938 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.938 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:54.938 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:54.938 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:54.938 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:56.840 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:56.840 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.840 19:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:56.840 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:56.840 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:56.840 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:56.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:08:56.841 00:08:56.841 --- 10.0.0.2 ping statistics --- 00:08:56.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.841 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:08:56.841 00:08:56.841 --- 10.0.0.1 ping statistics --- 00:08:56.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.841 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:08:56.841 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.099 19:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1017423 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1017423 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1017423 ']' 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.099 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.099 [2024-12-06 19:07:07.490154] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:08:57.099 [2024-12-06 19:07:07.490222] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.099 [2024-12-06 19:07:07.557294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.099 [2024-12-06 19:07:07.612214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.099 [2024-12-06 19:07:07.612272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.099 [2024-12-06 19:07:07.612296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.099 [2024-12-06 19:07:07.612306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.099 [2024-12-06 19:07:07.612315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:57.099 [2024-12-06 19:07:07.613965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.099 [2024-12-06 19:07:07.614043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.099 [2024-12-06 19:07:07.614125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:57.099 [2024-12-06 19:07:07.614128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.358 [2024-12-06 19:07:07.759162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:57.358 19:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:57.358 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.359 Malloc0 00:08:57.359 [2024-12-06 19:07:07.827557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1017470 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1017470 /var/tmp/bdevperf.sock 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1017470 ']' 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:57.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.359 { 00:08:57.359 "params": { 00:08:57.359 "name": "Nvme$subsystem", 00:08:57.359 "trtype": "$TEST_TRANSPORT", 00:08:57.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.359 "adrfam": "ipv4", 00:08:57.359 "trsvcid": "$NVMF_PORT", 00:08:57.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.359 "hdgst": ${hdgst:-false}, 
00:08:57.359 "ddgst": ${ddgst:-false} 00:08:57.359 }, 00:08:57.359 "method": "bdev_nvme_attach_controller" 00:08:57.359 } 00:08:57.359 EOF 00:08:57.359 )") 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:57.359 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.359 "params": { 00:08:57.359 "name": "Nvme0", 00:08:57.359 "trtype": "tcp", 00:08:57.359 "traddr": "10.0.0.2", 00:08:57.359 "adrfam": "ipv4", 00:08:57.359 "trsvcid": "4420", 00:08:57.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:57.359 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:57.359 "hdgst": false, 00:08:57.359 "ddgst": false 00:08:57.359 }, 00:08:57.359 "method": "bdev_nvme_attach_controller" 00:08:57.359 }' 00:08:57.359 [2024-12-06 19:07:07.910404] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:08:57.359 [2024-12-06 19:07:07.910490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017470 ] 00:08:57.618 [2024-12-06 19:07:07.980244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.618 [2024-12-06 19:07:08.039853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.876 Running I/O for 10 seconds... 
00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.876 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:57.877 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:57.877 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.136 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:58.136 [2024-12-06 19:07:08.662623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:58.136 [2024-12-06 19:07:08.662722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:58.136 [2024-12-06 19:07:08.662743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:58.136 [2024-12-06 19:07:08.662759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:58.136 [2024-12-06 19:07:08.662773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:58.136 [2024-12-06 19:07:08.662787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:58.136 [2024-12-06 19:07:08.662802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:58.137 [2024-12-06 19:07:08.662814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:58.137 [2024-12-06 19:07:08.662827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5660 is same with the state(6) to be set 00:08:58.137 [2024-12-06 19:07:08.662928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.137 [2024-12-06 19:07:08.662951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:58.137 [2024-12-06 19:07:08.662976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.137 [2024-12-06 19:07:08.662991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:58.137 [2024-12-06 19:07:08.663019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.137 [2024-12-06 19:07:08.663034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:58.137 [2024-12-06 19:07:08.663050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.137 [2024-12-06 19:07:08.663076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:58.137 [2024-12-06 19:07:08.663093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.137 
[2024-12-06 19:07:08.663108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.137 [2024-12-06 19:07:08.663124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.137 [2024-12-06 19:07:08.663138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs repeat for cid:22-63, lba:84736-89984 ...]
00:08:58.138 [2024-12-06 19:07:08.664490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.138 [2024-12-06 19:07:08.664504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1-15, lba:82048-83840 ...]
00:08:58.138 [2024-12-06 19:07:08.666240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:58.138 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.138 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:58.138 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.138 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:58.138 task offset: 83968 on job bdev=Nvme0n1 fails
00:08:58.138 
00:08:58.138 Latency(us)
00:08:58.138 [2024-12-06T18:07:08.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:58.138 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:58.138 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:58.138 Verification LBA range: start 0x0 length 0x400 00:08:58.138 Nvme0n1 : 0.40 1592.50 99.53 159.25 0.00 35474.92 2985.53 34369.99 00:08:58.138 [2024-12-06T18:07:08.715Z] =================================================================================================================== 00:08:58.138 [2024-12-06T18:07:08.715Z] Total : 1592.50 99.53 159.25 0.00 35474.92 2985.53 34369.99 00:08:58.138 [2024-12-06 19:07:08.668140] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:58.138 [2024-12-06 19:07:08.668169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5660 (9): Bad file descriptor 00:08:58.138 [2024-12-06 19:07:08.672781] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:58.138 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.138 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1017470 00:08:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1017470) - No such process 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json 
/dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:59.510 { 00:08:59.510 "params": { 00:08:59.510 "name": "Nvme$subsystem", 00:08:59.510 "trtype": "$TEST_TRANSPORT", 00:08:59.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.510 "adrfam": "ipv4", 00:08:59.510 "trsvcid": "$NVMF_PORT", 00:08:59.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.510 "hdgst": ${hdgst:-false}, 00:08:59.510 "ddgst": ${ddgst:-false} 00:08:59.510 }, 00:08:59.510 "method": "bdev_nvme_attach_controller" 00:08:59.510 } 00:08:59.510 EOF 00:08:59.510 )") 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
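The trace above shows `gen_nvmf_target_json` building a per-subsystem JSON block with a heredoc (so shell parameter expansion fills in the connection details) and then normalizing it with `jq`. A minimal sketch of that pattern follows; the variable names and fallback defaults are assumptions for illustration, not the exact helper from `nvmf/common.sh`:

```shell
# Sketch of the heredoc-based JSON generation traced above. The defaults
# (tcp, 10.0.0.2, 4420) are assumed stand-ins for the environment variables
# the real helper reads; hdgst/ddgst fall back to false when unset.
gen_target_json() {
    local subsystem=${1:-0}
    local trtype=${TEST_TRANSPORT:-tcp}
    local traddr=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
    local trsvcid=${NVMF_PORT:-4420}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$trtype",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 0
```

Because the values are substituted at expansion time, the same template yields the concrete `Nvme0`/`cnode0` config printed in the trace when called with subsystem `0`.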
00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:59.510 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:59.510 "params": { 00:08:59.510 "name": "Nvme0", 00:08:59.510 "trtype": "tcp", 00:08:59.510 "traddr": "10.0.0.2", 00:08:59.510 "adrfam": "ipv4", 00:08:59.511 "trsvcid": "4420", 00:08:59.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:59.511 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:59.511 "hdgst": false, 00:08:59.511 "ddgst": false 00:08:59.511 }, 00:08:59.511 "method": "bdev_nvme_attach_controller" 00:08:59.511 }' 00:08:59.511 [2024-12-06 19:07:09.727784] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:08:59.511 [2024-12-06 19:07:09.727860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017745 ] 00:08:59.511 [2024-12-06 19:07:09.795006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.511 [2024-12-06 19:07:09.855428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.768 Running I/O for 1 seconds... 
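The `--json /dev/fd/62` argument in the bdevperf invocation above is what bash process substitution looks like after tracing: the generated config is handed to the consumer as a file-descriptor path, never touching disk. A small sketch of the hand-off, with `cat` standing in for bdevperf (the generator and consumer names here are illustrative, not SPDK helpers):

```shell
# Sketch of the /dev/fd hand-off behind 'bdevperf --json /dev/fd/62'.
# Process substitution <(...) exposes gen_config's stdout as a readable
# path like /dev/fd/63, which the consumer opens like an ordinary file.
gen_config() {
    printf '%s\n' '{ "method": "bdev_nvme_attach_controller" }'
}

consume_json() {
    # stand-in for: bdevperf --json "$1" -q 64 -o 65536 -w verify -t 1
    cat "$1"
}

consume_json <(gen_config)
```

This is why the log shows a literal `/dev/fd/62` path: the script pipes `gen_nvmf_target_json` into the benchmark without a temporary config file.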
00:09:00.702 1664.00 IOPS, 104.00 MiB/s 00:09:00.702 Latency(us) 00:09:00.702 [2024-12-06T18:07:11.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.702 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:00.702 Verification LBA range: start 0x0 length 0x400 00:09:00.702 Nvme0n1 : 1.02 1702.02 106.38 0.00 0.00 36984.68 4684.61 33593.27 00:09:00.702 [2024-12-06T18:07:11.279Z] =================================================================================================================== 00:09:00.702 [2024-12-06T18:07:11.279Z] Total : 1702.02 106.38 0.00 0.00 36984.68 4684.61 33593.27 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:00.961 19:07:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:00.961 rmmod nvme_tcp 00:09:00.961 rmmod nvme_fabrics 00:09:00.961 rmmod nvme_keyring 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1017423 ']' 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1017423 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1017423 ']' 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1017423 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1017423 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1017423' 00:09:00.961 killing process with pid 1017423 00:09:00.961 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1017423 00:09:00.961 19:07:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1017423 00:09:01.221 [2024-12-06 19:07:11.639098] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.221 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.131 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:03.392 00:09:03.392 real 0m8.753s 00:09:03.392 user 0m19.293s 
00:09:03.392 sys 0m2.655s 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.392 ************************************ 00:09:03.392 END TEST nvmf_host_management 00:09:03.392 ************************************ 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.392 ************************************ 00:09:03.392 START TEST nvmf_lvol 00:09:03.392 ************************************ 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:03.392 * Looking for test storage... 
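The `END TEST nvmf_host_management` / `START TEST nvmf_lvol` banners and the `real`/`user`/`sys` timing above come from the `run_test` wrapper. A simplified guess at that pattern is sketched below; the real helper in `autotest_common.sh` also manages xtrace state and exit-code bookkeeping, which this sketch omits:

```shell
# Hedged sketch of the run_test banner-and-timing pattern visible in the
# log: each test command is wrapped so a START/END banner and its elapsed
# wall-clock time are recorded around the test's own output.
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local start=$SECONDS
    "$@"
    local rc=$?          # $? is expanded before 'local' runs, so rc is the test's status
    echo "************************************"
    echo "END TEST $name ($((SECONDS - start))s, rc=$rc)"
    echo "************************************"
    return $rc
}

run_test_sketch demo_test true
```

Wrapping each test this way is what makes a multi-megabyte autotest log greppable: searching for `START TEST` / `END TEST` pairs recovers the per-test boundaries and durations.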
00:09:03.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.392 19:07:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:03.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.392 --rc genhtml_branch_coverage=1 00:09:03.392 --rc genhtml_function_coverage=1 00:09:03.392 --rc genhtml_legend=1 00:09:03.392 --rc geninfo_all_blocks=1 00:09:03.392 --rc geninfo_unexecuted_blocks=1 
00:09:03.392 00:09:03.392 ' 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:03.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.392 --rc genhtml_branch_coverage=1 00:09:03.392 --rc genhtml_function_coverage=1 00:09:03.392 --rc genhtml_legend=1 00:09:03.392 --rc geninfo_all_blocks=1 00:09:03.392 --rc geninfo_unexecuted_blocks=1 00:09:03.392 00:09:03.392 ' 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:03.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.392 --rc genhtml_branch_coverage=1 00:09:03.392 --rc genhtml_function_coverage=1 00:09:03.392 --rc genhtml_legend=1 00:09:03.392 --rc geninfo_all_blocks=1 00:09:03.392 --rc geninfo_unexecuted_blocks=1 00:09:03.392 00:09:03.392 ' 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:03.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.392 --rc genhtml_branch_coverage=1 00:09:03.392 --rc genhtml_function_coverage=1 00:09:03.392 --rc genhtml_legend=1 00:09:03.392 --rc geninfo_all_blocks=1 00:09:03.392 --rc geninfo_unexecuted_blocks=1 00:09:03.392 00:09:03.392 ' 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.392 19:07:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.392 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:03.393 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:05.920 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:05.921 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:05.921 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.921 
19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:05.921 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.921 19:07:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:05.921 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:05.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:09:05.921 00:09:05.921 --- 10.0.0.2 ping statistics --- 00:09:05.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.921 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:09:05.921 00:09:05.921 --- 10.0.0.1 ping statistics --- 00:09:05.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.921 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1019851 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1019851 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1019851 ']' 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.921 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.921 [2024-12-06 19:07:16.243493] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:09:05.921 [2024-12-06 19:07:16.243570] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.921 [2024-12-06 19:07:16.313867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.921 [2024-12-06 19:07:16.372488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.921 [2024-12-06 19:07:16.372549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.921 [2024-12-06 19:07:16.372563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.921 [2024-12-06 19:07:16.372573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.921 [2024-12-06 19:07:16.372582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:05.921 [2024-12-06 19:07:16.377684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.921 [2024-12-06 19:07:16.377754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.921 [2024-12-06 19:07:16.377758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.179 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.179 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:06.179 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:06.179 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.179 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:06.179 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.179 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:06.436 [2024-12-06 19:07:16.772178] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.436 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.694 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:06.694 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.952 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:06.952 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:07.210 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:07.468 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=69b8a392-6963-42ad-8e57-30c113a28c27 00:09:07.468 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 69b8a392-6963-42ad-8e57-30c113a28c27 lvol 20 00:09:07.737 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=86781783-d0fe-49ea-a845-055cc8659e02 00:09:07.737 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:07.998 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86781783-d0fe-49ea-a845-055cc8659e02 00:09:08.256 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:08.515 [2024-12-06 19:07:19.009071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.515 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:08.773 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1020277 00:09:08.773 19:07:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:08.773 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:10.149 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 86781783-d0fe-49ea-a845-055cc8659e02 MY_SNAPSHOT 00:09:10.149 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9773b06b-36f3-41ef-9885-c4a036c0b741 00:09:10.149 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 86781783-d0fe-49ea-a845-055cc8659e02 30 00:09:10.408 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9773b06b-36f3-41ef-9885-c4a036c0b741 MY_CLONE 00:09:10.975 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cc8d6dcb-b656-41ba-9422-8e211eb70566 00:09:10.975 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cc8d6dcb-b656-41ba-9422-8e211eb70566 00:09:11.542 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1020277 00:09:19.650 Initializing NVMe Controllers 00:09:19.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:19.650 Controller IO queue size 128, less than required. 00:09:19.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:19.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:19.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:19.650 Initialization complete. Launching workers. 00:09:19.650 ======================================================== 00:09:19.650 Latency(us) 00:09:19.650 Device Information : IOPS MiB/s Average min max 00:09:19.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10337.00 40.38 12388.43 2053.39 141091.50 00:09:19.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10438.20 40.77 12265.39 2664.88 61529.44 00:09:19.650 ======================================================== 00:09:19.650 Total : 20775.20 81.15 12326.61 2053.39 141091.50 00:09:19.650 00:09:19.650 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:19.650 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86781783-d0fe-49ea-a845-055cc8659e02 00:09:19.650 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 69b8a392-6963-42ad-8e57-30c113a28c27 00:09:19.908 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:19.908 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:19.908 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:19.908 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.908 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:19.908 19:07:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.908 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:19.908 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.908 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.908 rmmod nvme_tcp 00:09:19.908 rmmod nvme_fabrics 00:09:20.166 rmmod nvme_keyring 00:09:20.166 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.166 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:20.166 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1019851 ']' 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1019851 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1019851 ']' 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1019851 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019851 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019851' 00:09:20.167 killing process with pid 1019851 00:09:20.167 
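The `killprocess` trace above checks that the pid is alive and that its command name is not `sudo` before killing it. A minimal self-contained sketch of that guard (simplified from the `common/autotest_common.sh` steps visible in the trace; the SIGKILL fallback and retry logic are omitted):

```shell
# Sketch of the killprocess guard traced above: refuse to signal a pid that
# is gone or whose command name is "sudo", then kill and report.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1           # pid must still exist
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1                 # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
}

sleep 30 &
killprocess $!
```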
19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1019851 00:09:20.167 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1019851 00:09:20.429 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:20.429 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:20.429 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:20.429 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:20.429 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:20.429 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:20.429 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:20.429 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.430 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.430 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.430 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.430 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.432 00:09:22.432 real 0m19.117s 00:09:22.432 user 1m4.554s 00:09:22.432 sys 0m5.780s 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:22.432 ************************************ 00:09:22.432 
END TEST nvmf_lvol 00:09:22.432 ************************************ 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.432 ************************************ 00:09:22.432 START TEST nvmf_lvs_grow 00:09:22.432 ************************************ 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:22.432 * Looking for test storage... 00:09:22.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.432 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.691 19:07:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:22.691 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.692 --rc genhtml_branch_coverage=1 00:09:22.692 --rc genhtml_function_coverage=1 00:09:22.692 --rc genhtml_legend=1 00:09:22.692 --rc geninfo_all_blocks=1 00:09:22.692 --rc geninfo_unexecuted_blocks=1 00:09:22.692 00:09:22.692 ' 
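The `scripts/common.sh` trace above (`cmp_versions 1.15 '<' 2`) splits each version string on `.` into an array and compares component-wise, padding the shorter version with zeros. A simplified bash sketch of that less-than comparison (`version_lt` is a hypothetical name; the script's own entry point is `lt`):

```shell
# Component-wise numeric version comparison, as in scripts/common.sh:
# split on dots, compare left to right, treat missing components as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the comparison is numeric per component, so `1.9 < 1.15` holds, unlike a plain string compare.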
00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.692 --rc genhtml_branch_coverage=1 00:09:22.692 --rc genhtml_function_coverage=1 00:09:22.692 --rc genhtml_legend=1 00:09:22.692 --rc geninfo_all_blocks=1 00:09:22.692 --rc geninfo_unexecuted_blocks=1 00:09:22.692 00:09:22.692 ' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.692 --rc genhtml_branch_coverage=1 00:09:22.692 --rc genhtml_function_coverage=1 00:09:22.692 --rc genhtml_legend=1 00:09:22.692 --rc geninfo_all_blocks=1 00:09:22.692 --rc geninfo_unexecuted_blocks=1 00:09:22.692 00:09:22.692 ' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.692 --rc genhtml_branch_coverage=1 00:09:22.692 --rc genhtml_function_coverage=1 00:09:22.692 --rc genhtml_legend=1 00:09:22.692 --rc geninfo_all_blocks=1 00:09:22.692 --rc geninfo_unexecuted_blocks=1 00:09:22.692 00:09:22.692 ' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.692 19:07:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.692 
19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.692 19:07:33 
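The long repeated `/opt/golangci/...:/opt/protoc/...:/opt/go/...` prefixes above come from `paths/export.sh` prepending the same directories every time it is sourced. A small sketch (with a hypothetical helper name, not part of the scripts) that strips duplicate entries while keeping first-seen order:

```shell
# Remove duplicate PATH entries, preserving the order of first occurrence:
# treat ":" as the awk record separator and print each entry only once.
dedupe_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedupe_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/bin"
echo
```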
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.692 
19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.692 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:25.224 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:25.224 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.224 
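The device discovery above buckets each PCI function by vendor/device id into the `e810`, `x722`, and `mlx` arrays (both ports found here are `0x8086 - 0x159b`, i.e. Intel E810). A sketch of that classification as a case table, using only ids visible in the `nvmf/common.sh` trace; `nic_family` is a hypothetical name:

```shell
# Map a PCI vendor/device pair onto the NIC family nvmf/common.sh tracks.
# Ids taken from the traced array setup: e810 (0x1592, 0x159b),
# x722 (0x37d2), and the Mellanox vendor id 0x15b3 for mlx.
nic_family() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}

nic_family 0x8086 0x159b   # the family of the two ports found above
```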
19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.224 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:25.225 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:25.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.225 19:07:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:09:25.225 00:09:25.225 --- 10.0.0.2 ping statistics --- 00:09:25.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.225 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:09:25.225 00:09:25.225 --- 10.0.0.1 ping statistics --- 00:09:25.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.225 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1023634 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1023634 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1023634 ']' 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.225 [2024-12-06 19:07:35.493497] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:09:25.225 [2024-12-06 19:07:35.493598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.225 [2024-12-06 19:07:35.564237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.225 [2024-12-06 19:07:35.617164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.225 [2024-12-06 19:07:35.617229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.225 [2024-12-06 19:07:35.617252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.225 [2024-12-06 19:07:35.617262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.225 [2024-12-06 19:07:35.617272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:25.225 [2024-12-06 19:07:35.617886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.225 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.483 [2024-12-06 19:07:35.999235] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.483 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:25.483 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.483 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.483 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.483 ************************************ 00:09:25.483 START TEST lvs_grow_clean 00:09:25.483 ************************************ 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.484 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.050 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:26.050 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:26.308 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f8075f8d-5daf-487d-9f99-3326a1026924 00:09:26.308 19:07:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:26.308 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:26.566 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:26.566 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:26.566 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f8075f8d-5daf-487d-9f99-3326a1026924 lvol 150 00:09:26.824 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4d63d52c-4bc7-432a-8bce-0590908998e2 00:09:26.824 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.824 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:27.082 [2024-12-06 19:07:37.426085] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:27.082 [2024-12-06 19:07:37.426176] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:27.082 true 00:09:27.082 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:27.082 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:27.339 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:27.339 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:27.597 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4d63d52c-4bc7-432a-8bce-0590908998e2 00:09:27.855 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.113 [2024-12-06 19:07:38.517355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.113 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.370 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1024022 00:09:28.370 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:28.370 19:07:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.370 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1024022 /var/tmp/bdevperf.sock 00:09:28.370 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1024022 ']' 00:09:28.370 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:28.370 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.370 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:28.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:28.370 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.370 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:28.370 [2024-12-06 19:07:38.849356] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:09:28.370 [2024-12-06 19:07:38.849426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024022 ] 00:09:28.370 [2024-12-06 19:07:38.915747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.628 [2024-12-06 19:07:38.973137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.628 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.628 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:28.628 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:29.192 Nvme0n1 00:09:29.192 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:29.450 [ 00:09:29.450 { 00:09:29.450 "name": "Nvme0n1", 00:09:29.450 "aliases": [ 00:09:29.450 "4d63d52c-4bc7-432a-8bce-0590908998e2" 00:09:29.450 ], 00:09:29.450 "product_name": "NVMe disk", 00:09:29.450 "block_size": 4096, 00:09:29.450 "num_blocks": 38912, 00:09:29.450 "uuid": "4d63d52c-4bc7-432a-8bce-0590908998e2", 00:09:29.450 "numa_id": 0, 00:09:29.450 "assigned_rate_limits": { 00:09:29.450 "rw_ios_per_sec": 0, 00:09:29.450 "rw_mbytes_per_sec": 0, 00:09:29.450 "r_mbytes_per_sec": 0, 00:09:29.450 "w_mbytes_per_sec": 0 00:09:29.450 }, 00:09:29.450 "claimed": false, 00:09:29.450 "zoned": false, 00:09:29.450 "supported_io_types": { 00:09:29.450 "read": true, 
00:09:29.450 "write": true, 00:09:29.450 "unmap": true, 00:09:29.450 "flush": true, 00:09:29.450 "reset": true, 00:09:29.450 "nvme_admin": true, 00:09:29.450 "nvme_io": true, 00:09:29.450 "nvme_io_md": false, 00:09:29.450 "write_zeroes": true, 00:09:29.450 "zcopy": false, 00:09:29.450 "get_zone_info": false, 00:09:29.450 "zone_management": false, 00:09:29.450 "zone_append": false, 00:09:29.450 "compare": true, 00:09:29.450 "compare_and_write": true, 00:09:29.450 "abort": true, 00:09:29.450 "seek_hole": false, 00:09:29.450 "seek_data": false, 00:09:29.450 "copy": true, 00:09:29.450 "nvme_iov_md": false 00:09:29.450 }, 00:09:29.450 "memory_domains": [ 00:09:29.450 { 00:09:29.450 "dma_device_id": "system", 00:09:29.450 "dma_device_type": 1 00:09:29.450 } 00:09:29.450 ], 00:09:29.450 "driver_specific": { 00:09:29.450 "nvme": [ 00:09:29.450 { 00:09:29.450 "trid": { 00:09:29.450 "trtype": "TCP", 00:09:29.450 "adrfam": "IPv4", 00:09:29.450 "traddr": "10.0.0.2", 00:09:29.450 "trsvcid": "4420", 00:09:29.450 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:29.450 }, 00:09:29.450 "ctrlr_data": { 00:09:29.450 "cntlid": 1, 00:09:29.450 "vendor_id": "0x8086", 00:09:29.450 "model_number": "SPDK bdev Controller", 00:09:29.450 "serial_number": "SPDK0", 00:09:29.450 "firmware_revision": "25.01", 00:09:29.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.450 "oacs": { 00:09:29.450 "security": 0, 00:09:29.450 "format": 0, 00:09:29.450 "firmware": 0, 00:09:29.450 "ns_manage": 0 00:09:29.450 }, 00:09:29.450 "multi_ctrlr": true, 00:09:29.450 "ana_reporting": false 00:09:29.450 }, 00:09:29.450 "vs": { 00:09:29.450 "nvme_version": "1.3" 00:09:29.450 }, 00:09:29.450 "ns_data": { 00:09:29.450 "id": 1, 00:09:29.450 "can_share": true 00:09:29.450 } 00:09:29.450 } 00:09:29.450 ], 00:09:29.450 "mp_policy": "active_passive" 00:09:29.450 } 00:09:29.450 } 00:09:29.450 ] 00:09:29.450 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1024144 00:09:29.450 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:29.450 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.450 Running I/O for 10 seconds... 00:09:30.826 Latency(us) 00:09:30.826 [2024-12-06T18:07:41.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.826 Nvme0n1 : 1.00 14733.00 57.55 0.00 0.00 0.00 0.00 0.00 00:09:30.826 [2024-12-06T18:07:41.403Z] =================================================================================================================== 00:09:30.826 [2024-12-06T18:07:41.403Z] Total : 14733.00 57.55 0.00 0.00 0.00 0.00 0.00 00:09:30.826 00:09:31.392 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:31.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.649 Nvme0n1 : 2.00 14940.00 58.36 0.00 0.00 0.00 0.00 0.00 00:09:31.649 [2024-12-06T18:07:42.226Z] =================================================================================================================== 00:09:31.649 [2024-12-06T18:07:42.226Z] Total : 14940.00 58.36 0.00 0.00 0.00 0.00 0.00 00:09:31.649 00:09:31.649 true 00:09:31.649 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:31.649 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:31.908 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:31.908 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:31.908 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1024144 00:09:32.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.475 Nvme0n1 : 3.00 15061.33 58.83 0.00 0.00 0.00 0.00 0.00 00:09:32.475 [2024-12-06T18:07:43.052Z] =================================================================================================================== 00:09:32.475 [2024-12-06T18:07:43.052Z] Total : 15061.33 58.83 0.00 0.00 0.00 0.00 0.00 00:09:32.475 00:09:33.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.410 Nvme0n1 : 4.00 15157.75 59.21 0.00 0.00 0.00 0.00 0.00 00:09:33.410 [2024-12-06T18:07:43.987Z] =================================================================================================================== 00:09:33.410 [2024-12-06T18:07:43.987Z] Total : 15157.75 59.21 0.00 0.00 0.00 0.00 0.00 00:09:33.410 00:09:34.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.789 Nvme0n1 : 5.00 15225.00 59.47 0.00 0.00 0.00 0.00 0.00 00:09:34.789 [2024-12-06T18:07:45.366Z] =================================================================================================================== 00:09:34.789 [2024-12-06T18:07:45.366Z] Total : 15225.00 59.47 0.00 0.00 0.00 0.00 0.00 00:09:34.789 00:09:35.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.727 Nvme0n1 : 6.00 15301.67 59.77 0.00 0.00 0.00 0.00 0.00 00:09:35.727 [2024-12-06T18:07:46.304Z] =================================================================================================================== 00:09:35.727 
[2024-12-06T18:07:46.304Z] Total : 15301.67 59.77 0.00 0.00 0.00 0.00 0.00 00:09:35.727 00:09:36.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.664 Nvme0n1 : 7.00 15347.57 59.95 0.00 0.00 0.00 0.00 0.00 00:09:36.664 [2024-12-06T18:07:47.241Z] =================================================================================================================== 00:09:36.664 [2024-12-06T18:07:47.241Z] Total : 15347.57 59.95 0.00 0.00 0.00 0.00 0.00 00:09:36.664 00:09:37.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.601 Nvme0n1 : 8.00 15394.12 60.13 0.00 0.00 0.00 0.00 0.00 00:09:37.601 [2024-12-06T18:07:48.178Z] =================================================================================================================== 00:09:37.601 [2024-12-06T18:07:48.178Z] Total : 15394.12 60.13 0.00 0.00 0.00 0.00 0.00 00:09:37.601 00:09:38.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.541 Nvme0n1 : 9.00 15426.78 60.26 0.00 0.00 0.00 0.00 0.00 00:09:38.541 [2024-12-06T18:07:49.118Z] =================================================================================================================== 00:09:38.541 [2024-12-06T18:07:49.118Z] Total : 15426.78 60.26 0.00 0.00 0.00 0.00 0.00 00:09:38.541 00:09:39.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.481 Nvme0n1 : 10.00 15465.20 60.41 0.00 0.00 0.00 0.00 0.00 00:09:39.481 [2024-12-06T18:07:50.058Z] =================================================================================================================== 00:09:39.481 [2024-12-06T18:07:50.058Z] Total : 15465.20 60.41 0.00 0.00 0.00 0.00 0.00 00:09:39.481 00:09:39.481 00:09:39.481 Latency(us) 00:09:39.481 [2024-12-06T18:07:50.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:39.481 Nvme0n1 : 10.01 15466.41 60.42 0.00 0.00 8271.02 4053.52 16117.00 00:09:39.481 [2024-12-06T18:07:50.058Z] =================================================================================================================== 00:09:39.481 [2024-12-06T18:07:50.058Z] Total : 15466.41 60.42 0.00 0.00 8271.02 4053.52 16117.00 00:09:39.481 { 00:09:39.481 "results": [ 00:09:39.481 { 00:09:39.481 "job": "Nvme0n1", 00:09:39.481 "core_mask": "0x2", 00:09:39.481 "workload": "randwrite", 00:09:39.481 "status": "finished", 00:09:39.481 "queue_depth": 128, 00:09:39.481 "io_size": 4096, 00:09:39.481 "runtime": 10.007492, 00:09:39.481 "iops": 15466.41256370727, 00:09:39.481 "mibps": 60.415674076981524, 00:09:39.481 "io_failed": 0, 00:09:39.481 "io_timeout": 0, 00:09:39.481 "avg_latency_us": 8271.018360377693, 00:09:39.481 "min_latency_us": 4053.522962962963, 00:09:39.481 "max_latency_us": 16117.001481481482 00:09:39.481 } 00:09:39.481 ], 00:09:39.481 "core_count": 1 00:09:39.481 } 00:09:39.481 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1024022 00:09:39.481 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1024022 ']' 00:09:39.481 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1024022 00:09:39.481 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:39.481 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.481 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024022 00:09:39.740 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:39.740 19:07:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:39.740 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024022' 00:09:39.740 killing process with pid 1024022 00:09:39.740 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1024022 00:09:39.740 Received shutdown signal, test time was about 10.000000 seconds 00:09:39.740 00:09:39.740 Latency(us) 00:09:39.740 [2024-12-06T18:07:50.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.740 [2024-12-06T18:07:50.317Z] =================================================================================================================== 00:09:39.740 [2024-12-06T18:07:50.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:39.740 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1024022 00:09:39.740 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:39.998 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.256 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:40.256 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:40.516 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:40.516 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:40.516 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:40.775 [2024-12-06 19:07:51.343554] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.033 
19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:41.033 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:41.291 request: 00:09:41.291 { 00:09:41.291 "uuid": "f8075f8d-5daf-487d-9f99-3326a1026924", 00:09:41.291 "method": "bdev_lvol_get_lvstores", 00:09:41.291 "req_id": 1 00:09:41.291 } 00:09:41.291 Got JSON-RPC error response 00:09:41.291 response: 00:09:41.291 { 00:09:41.291 "code": -19, 00:09:41.291 "message": "No such device" 00:09:41.291 } 00:09:41.291 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:41.291 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:41.291 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:41.291 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:41.291 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.549 aio_bdev 00:09:41.549 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4d63d52c-4bc7-432a-8bce-0590908998e2 00:09:41.549 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4d63d52c-4bc7-432a-8bce-0590908998e2 00:09:41.549 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.549 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:41.549 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.549 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.549 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.808 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4d63d52c-4bc7-432a-8bce-0590908998e2 -t 2000 00:09:42.067 [ 00:09:42.067 { 00:09:42.067 "name": "4d63d52c-4bc7-432a-8bce-0590908998e2", 00:09:42.067 "aliases": [ 00:09:42.067 "lvs/lvol" 00:09:42.067 ], 00:09:42.067 "product_name": "Logical Volume", 00:09:42.067 "block_size": 4096, 00:09:42.067 "num_blocks": 38912, 00:09:42.067 "uuid": "4d63d52c-4bc7-432a-8bce-0590908998e2", 00:09:42.067 "assigned_rate_limits": { 00:09:42.067 "rw_ios_per_sec": 0, 00:09:42.067 "rw_mbytes_per_sec": 0, 00:09:42.067 "r_mbytes_per_sec": 0, 00:09:42.067 "w_mbytes_per_sec": 0 00:09:42.067 }, 00:09:42.067 "claimed": false, 00:09:42.067 "zoned": false, 00:09:42.067 "supported_io_types": { 00:09:42.067 "read": true, 00:09:42.067 "write": true, 00:09:42.067 "unmap": true, 00:09:42.067 "flush": false, 00:09:42.067 "reset": true, 00:09:42.067 
"nvme_admin": false, 00:09:42.067 "nvme_io": false, 00:09:42.067 "nvme_io_md": false, 00:09:42.067 "write_zeroes": true, 00:09:42.067 "zcopy": false, 00:09:42.067 "get_zone_info": false, 00:09:42.067 "zone_management": false, 00:09:42.067 "zone_append": false, 00:09:42.067 "compare": false, 00:09:42.067 "compare_and_write": false, 00:09:42.067 "abort": false, 00:09:42.067 "seek_hole": true, 00:09:42.067 "seek_data": true, 00:09:42.067 "copy": false, 00:09:42.067 "nvme_iov_md": false 00:09:42.067 }, 00:09:42.067 "driver_specific": { 00:09:42.067 "lvol": { 00:09:42.067 "lvol_store_uuid": "f8075f8d-5daf-487d-9f99-3326a1026924", 00:09:42.067 "base_bdev": "aio_bdev", 00:09:42.067 "thin_provision": false, 00:09:42.067 "num_allocated_clusters": 38, 00:09:42.067 "snapshot": false, 00:09:42.067 "clone": false, 00:09:42.067 "esnap_clone": false 00:09:42.067 } 00:09:42.067 } 00:09:42.067 } 00:09:42.067 ] 00:09:42.067 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:42.067 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:42.067 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:42.327 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:42.327 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:42.327 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:42.587 19:07:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:42.587 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4d63d52c-4bc7-432a-8bce-0590908998e2 00:09:42.846 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f8075f8d-5daf-487d-9f99-3326a1026924 00:09:43.104 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:43.363 00:09:43.363 real 0m17.821s 00:09:43.363 user 0m17.342s 00:09:43.363 sys 0m1.854s 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:43.363 ************************************ 00:09:43.363 END TEST lvs_grow_clean 00:09:43.363 ************************************ 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.363 ************************************ 
00:09:43.363 START TEST lvs_grow_dirty 00:09:43.363 ************************************ 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:43.363 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.929 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:43.929 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:43.929 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:09:43.929 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:09:43.929 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:44.187 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:44.187 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:44.187 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 lvol 150 00:09:44.754 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=32938224-0056-4211-b82b-2680cf8b1a7e 00:09:44.754 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:44.754 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:44.754 [2024-12-06 19:07:55.281103] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:44.754 [2024-12-06 19:07:55.281200] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:44.754 true 00:09:44.754 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:09:44.754 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:45.019 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:45.019 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:45.277 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32938224-0056-4211-b82b-2680cf8b1a7e 00:09:45.841 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:45.841 [2024-12-06 19:07:56.356253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.841 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.097 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1026195 00:09:46.098 19:07:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:46.098 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.098 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1026195 /var/tmp/bdevperf.sock 00:09:46.098 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1026195 ']' 00:09:46.098 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.098 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.098 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.098 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.098 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.354 [2024-12-06 19:07:56.683553] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
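The bdevperf invocation above uses 4096-byte IOs (`-o 4096`), so the IOPS and MiB/s columns in its summary tables are related by a fixed conversion. A small sketch checking that relationship, using the final-summary figures this run reports at the end of the 10-second test (15444.07 IOPS, 60.33 MiB/s):

```shell
# Throughput = IOPS * io_size. The iops value below is taken from the
# run's final summary table; io_size matches bdevperf's -o 4096 option.
iops=15444.07
io_size=4096
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "$mibps MiB/s"   # 60.33 MiB/s, matching the summary table
```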
00:09:46.355 [2024-12-06 19:07:56.683624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026195 ] 00:09:46.355 [2024-12-06 19:07:56.747899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.355 [2024-12-06 19:07:56.804773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.355 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.355 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:46.355 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:46.919 Nvme0n1 00:09:46.919 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:47.177 [ 00:09:47.177 { 00:09:47.177 "name": "Nvme0n1", 00:09:47.177 "aliases": [ 00:09:47.177 "32938224-0056-4211-b82b-2680cf8b1a7e" 00:09:47.177 ], 00:09:47.177 "product_name": "NVMe disk", 00:09:47.177 "block_size": 4096, 00:09:47.177 "num_blocks": 38912, 00:09:47.177 "uuid": "32938224-0056-4211-b82b-2680cf8b1a7e", 00:09:47.177 "numa_id": 0, 00:09:47.177 "assigned_rate_limits": { 00:09:47.177 "rw_ios_per_sec": 0, 00:09:47.177 "rw_mbytes_per_sec": 0, 00:09:47.177 "r_mbytes_per_sec": 0, 00:09:47.177 "w_mbytes_per_sec": 0 00:09:47.177 }, 00:09:47.177 "claimed": false, 00:09:47.177 "zoned": false, 00:09:47.177 "supported_io_types": { 00:09:47.177 "read": true, 
00:09:47.177 "write": true, 00:09:47.177 "unmap": true, 00:09:47.177 "flush": true, 00:09:47.177 "reset": true, 00:09:47.177 "nvme_admin": true, 00:09:47.177 "nvme_io": true, 00:09:47.177 "nvme_io_md": false, 00:09:47.177 "write_zeroes": true, 00:09:47.177 "zcopy": false, 00:09:47.177 "get_zone_info": false, 00:09:47.177 "zone_management": false, 00:09:47.177 "zone_append": false, 00:09:47.177 "compare": true, 00:09:47.177 "compare_and_write": true, 00:09:47.177 "abort": true, 00:09:47.177 "seek_hole": false, 00:09:47.177 "seek_data": false, 00:09:47.177 "copy": true, 00:09:47.177 "nvme_iov_md": false 00:09:47.177 }, 00:09:47.177 "memory_domains": [ 00:09:47.177 { 00:09:47.177 "dma_device_id": "system", 00:09:47.177 "dma_device_type": 1 00:09:47.177 } 00:09:47.177 ], 00:09:47.177 "driver_specific": { 00:09:47.177 "nvme": [ 00:09:47.177 { 00:09:47.177 "trid": { 00:09:47.177 "trtype": "TCP", 00:09:47.177 "adrfam": "IPv4", 00:09:47.177 "traddr": "10.0.0.2", 00:09:47.177 "trsvcid": "4420", 00:09:47.177 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:47.177 }, 00:09:47.177 "ctrlr_data": { 00:09:47.177 "cntlid": 1, 00:09:47.177 "vendor_id": "0x8086", 00:09:47.177 "model_number": "SPDK bdev Controller", 00:09:47.177 "serial_number": "SPDK0", 00:09:47.177 "firmware_revision": "25.01", 00:09:47.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:47.177 "oacs": { 00:09:47.177 "security": 0, 00:09:47.177 "format": 0, 00:09:47.177 "firmware": 0, 00:09:47.177 "ns_manage": 0 00:09:47.177 }, 00:09:47.177 "multi_ctrlr": true, 00:09:47.177 "ana_reporting": false 00:09:47.177 }, 00:09:47.177 "vs": { 00:09:47.177 "nvme_version": "1.3" 00:09:47.177 }, 00:09:47.177 "ns_data": { 00:09:47.177 "id": 1, 00:09:47.177 "can_share": true 00:09:47.177 } 00:09:47.177 } 00:09:47.177 ], 00:09:47.177 "mp_policy": "active_passive" 00:09:47.177 } 00:09:47.177 } 00:09:47.177 ] 00:09:47.177 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1026331 00:09:47.177 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:47.177 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:47.435 Running I/O for 10 seconds... 00:09:48.368 Latency(us) 00:09:48.368 [2024-12-06T18:07:58.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.368 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:09:48.368 [2024-12-06T18:07:58.945Z] =================================================================================================================== 00:09:48.368 [2024-12-06T18:07:58.945Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:09:48.368 00:09:49.302 19:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:09:49.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.302 Nvme0n1 : 2.00 15018.50 58.67 0.00 0.00 0.00 0.00 0.00 00:09:49.302 [2024-12-06T18:07:59.879Z] =================================================================================================================== 00:09:49.302 [2024-12-06T18:07:59.879Z] Total : 15018.50 58.67 0.00 0.00 0.00 0.00 0.00 00:09:49.302 00:09:49.591 true 00:09:49.591 19:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:09:49.591 19:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:49.850 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:49.850 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:49.850 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1026331 00:09:50.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.418 Nvme0n1 : 3.00 15008.33 58.63 0.00 0.00 0.00 0.00 0.00 00:09:50.418 [2024-12-06T18:08:00.995Z] =================================================================================================================== 00:09:50.418 [2024-12-06T18:08:00.995Z] Total : 15008.33 58.63 0.00 0.00 0.00 0.00 0.00 00:09:50.418 00:09:51.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.351 Nvme0n1 : 4.00 15129.75 59.10 0.00 0.00 0.00 0.00 0.00 00:09:51.351 [2024-12-06T18:08:01.928Z] =================================================================================================================== 00:09:51.351 [2024-12-06T18:08:01.928Z] Total : 15129.75 59.10 0.00 0.00 0.00 0.00 0.00 00:09:51.351 00:09:52.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.284 Nvme0n1 : 5.00 15215.40 59.44 0.00 0.00 0.00 0.00 0.00 00:09:52.284 [2024-12-06T18:08:02.861Z] =================================================================================================================== 00:09:52.284 [2024-12-06T18:08:02.861Z] Total : 15215.40 59.44 0.00 0.00 0.00 0.00 0.00 00:09:52.284 00:09:53.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.266 Nvme0n1 : 6.00 15272.50 59.66 0.00 0.00 0.00 0.00 0.00 00:09:53.266 [2024-12-06T18:08:03.843Z] =================================================================================================================== 00:09:53.266 
[2024-12-06T18:08:03.843Z] Total : 15272.50 59.66 0.00 0.00 0.00 0.00 0.00 00:09:53.266 00:09:54.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.672 Nvme0n1 : 7.00 15322.71 59.85 0.00 0.00 0.00 0.00 0.00 00:09:54.672 [2024-12-06T18:08:05.249Z] =================================================================================================================== 00:09:54.672 [2024-12-06T18:08:05.249Z] Total : 15322.71 59.85 0.00 0.00 0.00 0.00 0.00 00:09:54.672 00:09:55.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.237 Nvme0n1 : 8.00 15361.12 60.00 0.00 0.00 0.00 0.00 0.00 00:09:55.237 [2024-12-06T18:08:05.814Z] =================================================================================================================== 00:09:55.237 [2024-12-06T18:08:05.814Z] Total : 15361.12 60.00 0.00 0.00 0.00 0.00 0.00 00:09:55.237 00:09:56.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.612 Nvme0n1 : 9.00 15404.67 60.17 0.00 0.00 0.00 0.00 0.00 00:09:56.612 [2024-12-06T18:08:07.189Z] =================================================================================================================== 00:09:56.612 [2024-12-06T18:08:07.189Z] Total : 15404.67 60.17 0.00 0.00 0.00 0.00 0.00 00:09:56.612 00:09:57.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.546 Nvme0n1 : 10.00 15445.30 60.33 0.00 0.00 0.00 0.00 0.00 00:09:57.546 [2024-12-06T18:08:08.123Z] =================================================================================================================== 00:09:57.546 [2024-12-06T18:08:08.123Z] Total : 15445.30 60.33 0.00 0.00 0.00 0.00 0.00 00:09:57.546 00:09:57.546 00:09:57.546 Latency(us) 00:09:57.546 [2024-12-06T18:08:08.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:57.546 Nvme0n1 : 10.01 15444.07 60.33 0.00 0.00 8283.35 2354.44 20097.71 00:09:57.546 [2024-12-06T18:08:08.123Z] =================================================================================================================== 00:09:57.546 [2024-12-06T18:08:08.123Z] Total : 15444.07 60.33 0.00 0.00 8283.35 2354.44 20097.71 00:09:57.546 { 00:09:57.546 "results": [ 00:09:57.546 { 00:09:57.546 "job": "Nvme0n1", 00:09:57.546 "core_mask": "0x2", 00:09:57.546 "workload": "randwrite", 00:09:57.546 "status": "finished", 00:09:57.546 "queue_depth": 128, 00:09:57.546 "io_size": 4096, 00:09:57.546 "runtime": 10.009084, 00:09:57.546 "iops": 15444.070606261272, 00:09:57.546 "mibps": 60.32840080570809, 00:09:57.546 "io_failed": 0, 00:09:57.546 "io_timeout": 0, 00:09:57.546 "avg_latency_us": 8283.34878838782, 00:09:57.546 "min_latency_us": 2354.4414814814813, 00:09:57.546 "max_latency_us": 20097.706666666665 00:09:57.546 } 00:09:57.546 ], 00:09:57.546 "core_count": 1 00:09:57.546 } 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1026195 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1026195 ']' 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1026195 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1026195 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:57.546 19:08:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1026195' 00:09:57.546 killing process with pid 1026195 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1026195 00:09:57.546 Received shutdown signal, test time was about 10.000000 seconds 00:09:57.546 00:09:57.546 Latency(us) 00:09:57.546 [2024-12-06T18:08:08.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.546 [2024-12-06T18:08:08.123Z] =================================================================================================================== 00:09:57.546 [2024-12-06T18:08:08.123Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:57.546 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1026195 00:09:57.546 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:57.804 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1023634 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1023634 00:09:58.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1023634 Killed "${NVMF_APP[@]}" "$@" 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1027672 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1027672 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1027672 ']' 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.414 19:08:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.414 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.672 [2024-12-06 19:08:09.009994] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:09:58.672 [2024-12-06 19:08:09.010079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.672 [2024-12-06 19:08:09.081957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.672 [2024-12-06 19:08:09.140811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.672 [2024-12-06 19:08:09.140873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.672 [2024-12-06 19:08:09.140886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.672 [2024-12-06 19:08:09.140897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.672 [2024-12-06 19:08:09.140907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
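The block and cluster counts asserted throughout this test follow from the sizes the script configures. A sketch of that arithmetic, under assumptions inferred from the trace (4096-byte blocks, 4 MiB lvstore clusters, and one cluster of lvstore metadata overhead — the exact overhead depends on lvstore settings):

```shell
block_size=4096
cluster_size=$((4 * 1024 * 1024))
mib=$((1024 * 1024))

# AIO file: created at 200 MiB, truncated to 400 MiB and rescanned.
old_blocks=$((200 * mib / block_size))   # 51200, as logged by bdev_aio_rescan
new_blocks=$((400 * mib / block_size))   # 102400

# After the grow: 400 MiB / 4 MiB = 100 clusters, minus the assumed one
# metadata cluster leaves the 99 total_data_clusters the RPC reports.
total_clusters=$((400 * mib / cluster_size - 1))   # 99

# The 150 MiB thick-provisioned lvol allocates ceil(150/4) = 38 clusters
# (num_allocated_clusters in bdev_get_bdevs), leaving free_clusters=61.
allocated=$(( (150 + 3) / 4 ))                     # 38
free_clusters=$((total_clusters - allocated))      # 61

echo "$old_blocks $new_blocks $total_clusters $allocated $free_clusters"
```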
00:09:58.672 [2024-12-06 19:08:09.141477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.929 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.929 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:58.929 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.929 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.929 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.929 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.929 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:59.187 [2024-12-06 19:08:09.555497] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:59.187 [2024-12-06 19:08:09.555645] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:59.187 [2024-12-06 19:08:09.555726] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:59.187 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:59.187 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 32938224-0056-4211-b82b-2680cf8b1a7e 00:09:59.187 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=32938224-0056-4211-b82b-2680cf8b1a7e 
00:09:59.187 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.187 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:59.187 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.187 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.187 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:59.443 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32938224-0056-4211-b82b-2680cf8b1a7e -t 2000 00:09:59.700 [ 00:09:59.700 { 00:09:59.700 "name": "32938224-0056-4211-b82b-2680cf8b1a7e", 00:09:59.700 "aliases": [ 00:09:59.700 "lvs/lvol" 00:09:59.700 ], 00:09:59.700 "product_name": "Logical Volume", 00:09:59.700 "block_size": 4096, 00:09:59.700 "num_blocks": 38912, 00:09:59.700 "uuid": "32938224-0056-4211-b82b-2680cf8b1a7e", 00:09:59.700 "assigned_rate_limits": { 00:09:59.700 "rw_ios_per_sec": 0, 00:09:59.700 "rw_mbytes_per_sec": 0, 00:09:59.700 "r_mbytes_per_sec": 0, 00:09:59.700 "w_mbytes_per_sec": 0 00:09:59.700 }, 00:09:59.700 "claimed": false, 00:09:59.700 "zoned": false, 00:09:59.700 "supported_io_types": { 00:09:59.700 "read": true, 00:09:59.700 "write": true, 00:09:59.700 "unmap": true, 00:09:59.700 "flush": false, 00:09:59.700 "reset": true, 00:09:59.700 "nvme_admin": false, 00:09:59.700 "nvme_io": false, 00:09:59.700 "nvme_io_md": false, 00:09:59.700 "write_zeroes": true, 00:09:59.700 "zcopy": false, 00:09:59.700 "get_zone_info": false, 00:09:59.700 "zone_management": false, 00:09:59.700 "zone_append": 
false, 00:09:59.700 "compare": false, 00:09:59.700 "compare_and_write": false, 00:09:59.700 "abort": false, 00:09:59.700 "seek_hole": true, 00:09:59.700 "seek_data": true, 00:09:59.700 "copy": false, 00:09:59.700 "nvme_iov_md": false 00:09:59.700 }, 00:09:59.700 "driver_specific": { 00:09:59.700 "lvol": { 00:09:59.700 "lvol_store_uuid": "b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0", 00:09:59.700 "base_bdev": "aio_bdev", 00:09:59.700 "thin_provision": false, 00:09:59.700 "num_allocated_clusters": 38, 00:09:59.700 "snapshot": false, 00:09:59.700 "clone": false, 00:09:59.700 "esnap_clone": false 00:09:59.700 } 00:09:59.700 } 00:09:59.700 } 00:09:59.700 ] 00:09:59.700 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:59.700 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:09:59.700 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:59.957 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:59.957 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:09:59.957 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:00.214 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:00.214 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:00.471 [2024-12-06 19:08:10.969246] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:00.471 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:10:00.471 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:00.471 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:10:00.471 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.471 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.471 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.471 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.471 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.471 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.471 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.471 19:08:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:00.471 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:10:00.729 request: 00:10:00.729 { 00:10:00.729 "uuid": "b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0", 00:10:00.729 "method": "bdev_lvol_get_lvstores", 00:10:00.729 "req_id": 1 00:10:00.729 } 00:10:00.729 Got JSON-RPC error response 00:10:00.729 response: 00:10:00.729 { 00:10:00.729 "code": -19, 00:10:00.729 "message": "No such device" 00:10:00.729 } 00:10:00.729 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:00.729 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.729 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.729 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.729 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.987 aio_bdev 00:10:00.987 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 32938224-0056-4211-b82b-2680cf8b1a7e 00:10:00.987 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=32938224-0056-4211-b82b-2680cf8b1a7e 00:10:00.987 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.987 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:00.987 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.987 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.987 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:01.246 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32938224-0056-4211-b82b-2680cf8b1a7e -t 2000 00:10:01.503 [ 00:10:01.504 { 00:10:01.504 "name": "32938224-0056-4211-b82b-2680cf8b1a7e", 00:10:01.504 "aliases": [ 00:10:01.504 "lvs/lvol" 00:10:01.504 ], 00:10:01.504 "product_name": "Logical Volume", 00:10:01.504 "block_size": 4096, 00:10:01.504 "num_blocks": 38912, 00:10:01.504 "uuid": "32938224-0056-4211-b82b-2680cf8b1a7e", 00:10:01.504 "assigned_rate_limits": { 00:10:01.504 "rw_ios_per_sec": 0, 00:10:01.504 "rw_mbytes_per_sec": 0, 00:10:01.504 "r_mbytes_per_sec": 0, 00:10:01.504 "w_mbytes_per_sec": 0 00:10:01.504 }, 00:10:01.504 "claimed": false, 00:10:01.504 "zoned": false, 00:10:01.504 "supported_io_types": { 00:10:01.504 "read": true, 00:10:01.504 "write": true, 00:10:01.504 "unmap": true, 00:10:01.504 "flush": false, 00:10:01.504 "reset": true, 00:10:01.504 "nvme_admin": false, 00:10:01.504 "nvme_io": false, 00:10:01.504 "nvme_io_md": false, 00:10:01.504 "write_zeroes": true, 00:10:01.504 "zcopy": false, 00:10:01.504 "get_zone_info": false, 00:10:01.504 "zone_management": false, 00:10:01.504 "zone_append": false, 00:10:01.504 "compare": false, 00:10:01.504 "compare_and_write": false, 
00:10:01.504 "abort": false, 00:10:01.504 "seek_hole": true, 00:10:01.504 "seek_data": true, 00:10:01.504 "copy": false, 00:10:01.504 "nvme_iov_md": false 00:10:01.504 }, 00:10:01.504 "driver_specific": { 00:10:01.504 "lvol": { 00:10:01.504 "lvol_store_uuid": "b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0", 00:10:01.504 "base_bdev": "aio_bdev", 00:10:01.504 "thin_provision": false, 00:10:01.504 "num_allocated_clusters": 38, 00:10:01.504 "snapshot": false, 00:10:01.504 "clone": false, 00:10:01.504 "esnap_clone": false 00:10:01.504 } 00:10:01.504 } 00:10:01.504 } 00:10:01.504 ] 00:10:01.761 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:01.761 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:10:01.761 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:02.019 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:02.019 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:10:02.019 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:02.277 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:02.277 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 32938224-0056-4211-b82b-2680cf8b1a7e 00:10:02.535 19:08:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2ffd6e0-fdad-4ce5-ae7b-264d7fb21af0 00:10:02.793 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:03.051 00:10:03.051 real 0m19.546s 00:10:03.051 user 0m49.563s 00:10:03.051 sys 0m4.489s 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:03.051 ************************************ 00:10:03.051 END TEST lvs_grow_dirty 00:10:03.051 ************************************ 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:03.051 nvmf_trace.0 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:03.051 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.052 rmmod nvme_tcp 00:10:03.052 rmmod nvme_fabrics 00:10:03.052 rmmod nvme_keyring 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1027672 ']' 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1027672 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1027672 ']' 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1027672 
00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1027672 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1027672' 00:10:03.052 killing process with pid 1027672 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1027672 00:10:03.052 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1027672 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.310 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.854 19:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.854 00:10:05.854 real 0m42.973s 00:10:05.854 user 1m13.039s 00:10:05.854 sys 0m8.422s 00:10:05.854 19:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.854 19:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:05.854 ************************************ 00:10:05.854 END TEST nvmf_lvs_grow 00:10:05.854 ************************************ 00:10:05.854 19:08:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:05.854 19:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.854 19:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.854 19:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.854 ************************************ 00:10:05.854 START TEST nvmf_bdev_io_wait 00:10:05.854 ************************************ 00:10:05.854 19:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:05.854 * Looking for test storage... 
00:10:05.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.854 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.854 --rc genhtml_branch_coverage=1 00:10:05.854 --rc genhtml_function_coverage=1 00:10:05.854 --rc genhtml_legend=1 00:10:05.854 --rc geninfo_all_blocks=1 00:10:05.854 --rc geninfo_unexecuted_blocks=1 00:10:05.854 00:10:05.854 ' 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.854 --rc genhtml_branch_coverage=1 00:10:05.854 --rc genhtml_function_coverage=1 00:10:05.854 --rc genhtml_legend=1 00:10:05.854 --rc geninfo_all_blocks=1 00:10:05.854 --rc geninfo_unexecuted_blocks=1 00:10:05.854 00:10:05.854 ' 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.854 --rc genhtml_branch_coverage=1 00:10:05.854 --rc genhtml_function_coverage=1 00:10:05.854 --rc genhtml_legend=1 00:10:05.854 --rc geninfo_all_blocks=1 00:10:05.854 --rc geninfo_unexecuted_blocks=1 00:10:05.854 00:10:05.854 ' 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.854 --rc genhtml_branch_coverage=1 00:10:05.854 --rc genhtml_function_coverage=1 00:10:05.854 --rc genhtml_legend=1 00:10:05.854 --rc geninfo_all_blocks=1 00:10:05.854 --rc geninfo_unexecuted_blocks=1 00:10:05.854 00:10:05.854 ' 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:05.854 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.855 19:08:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.855 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.759 19:08:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:07.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:07.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.759 19:08:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:07.759 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.759 
19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:07.759 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:07.759 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.760 19:08:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.760 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:10:08.019 00:10:08.019 --- 10.0.0.2 ping statistics --- 00:10:08.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.019 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:10:08.019 00:10:08.019 --- 10.0.0.1 ping statistics --- 00:10:08.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.019 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1030265 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1030265 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1030265 ']' 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.019 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.019 [2024-12-06 19:08:18.501530] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:08.019 [2024-12-06 19:08:18.501614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.019 [2024-12-06 19:08:18.575445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.278 [2024-12-06 19:08:18.633642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.278 [2024-12-06 19:08:18.633719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:08.278 [2024-12-06 19:08:18.633743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.278 [2024-12-06 19:08:18.633754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.278 [2024-12-06 19:08:18.633763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.278 [2024-12-06 19:08:18.635373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.278 [2024-12-06 19:08:18.635441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.278 [2024-12-06 19:08:18.635506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.278 [2024-12-06 19:08:18.635509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.278 19:08:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.278 [2024-12-06 19:08:18.841692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.278 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.537 Malloc0 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.537 
19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.537 [2024-12-06 19:08:18.892546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1030356 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:08.537 
19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1030358 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.537 { 00:10:08.537 "params": { 00:10:08.537 "name": "Nvme$subsystem", 00:10:08.537 "trtype": "$TEST_TRANSPORT", 00:10:08.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.537 "adrfam": "ipv4", 00:10:08.537 "trsvcid": "$NVMF_PORT", 00:10:08.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.537 "hdgst": ${hdgst:-false}, 00:10:08.537 "ddgst": ${ddgst:-false} 00:10:08.537 }, 00:10:08.537 "method": "bdev_nvme_attach_controller" 00:10:08.537 } 00:10:08.537 EOF 00:10:08.537 )") 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1030360 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.537 { 00:10:08.537 "params": { 00:10:08.537 
"name": "Nvme$subsystem", 00:10:08.537 "trtype": "$TEST_TRANSPORT", 00:10:08.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.537 "adrfam": "ipv4", 00:10:08.537 "trsvcid": "$NVMF_PORT", 00:10:08.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.537 "hdgst": ${hdgst:-false}, 00:10:08.537 "ddgst": ${ddgst:-false} 00:10:08.537 }, 00:10:08.537 "method": "bdev_nvme_attach_controller" 00:10:08.537 } 00:10:08.537 EOF 00:10:08.537 )") 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1030363 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.537 { 00:10:08.537 "params": { 00:10:08.537 "name": "Nvme$subsystem", 00:10:08.537 "trtype": "$TEST_TRANSPORT", 00:10:08.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.537 "adrfam": "ipv4", 00:10:08.537 "trsvcid": "$NVMF_PORT", 00:10:08.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.537 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:10:08.537 "hdgst": ${hdgst:-false}, 00:10:08.537 "ddgst": ${ddgst:-false} 00:10:08.537 }, 00:10:08.537 "method": "bdev_nvme_attach_controller" 00:10:08.537 } 00:10:08.537 EOF 00:10:08.537 )") 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.537 { 00:10:08.537 "params": { 00:10:08.537 "name": "Nvme$subsystem", 00:10:08.537 "trtype": "$TEST_TRANSPORT", 00:10:08.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.537 "adrfam": "ipv4", 00:10:08.537 "trsvcid": "$NVMF_PORT", 00:10:08.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.537 "hdgst": ${hdgst:-false}, 00:10:08.537 "ddgst": ${ddgst:-false} 00:10:08.537 }, 00:10:08.537 "method": "bdev_nvme_attach_controller" 00:10:08.537 } 00:10:08.537 EOF 00:10:08.537 )") 00:10:08.537 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1030356 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@584 -- # jq . 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.538 "params": { 00:10:08.538 "name": "Nvme1", 00:10:08.538 "trtype": "tcp", 00:10:08.538 "traddr": "10.0.0.2", 00:10:08.538 "adrfam": "ipv4", 00:10:08.538 "trsvcid": "4420", 00:10:08.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.538 "hdgst": false, 00:10:08.538 "ddgst": false 00:10:08.538 }, 00:10:08.538 "method": "bdev_nvme_attach_controller" 00:10:08.538 }' 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.538 "params": { 00:10:08.538 "name": "Nvme1", 00:10:08.538 "trtype": "tcp", 00:10:08.538 "traddr": "10.0.0.2", 00:10:08.538 "adrfam": "ipv4", 00:10:08.538 "trsvcid": "4420", 00:10:08.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.538 "hdgst": false, 00:10:08.538 "ddgst": false 00:10:08.538 }, 00:10:08.538 "method": "bdev_nvme_attach_controller" 00:10:08.538 }' 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.538 "params": { 00:10:08.538 "name": "Nvme1", 00:10:08.538 "trtype": "tcp", 00:10:08.538 "traddr": "10.0.0.2", 00:10:08.538 "adrfam": "ipv4", 00:10:08.538 "trsvcid": "4420", 00:10:08.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.538 "hdgst": false, 00:10:08.538 "ddgst": false 00:10:08.538 }, 00:10:08.538 "method": "bdev_nvme_attach_controller" 00:10:08.538 }' 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:08.538 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.538 "params": { 00:10:08.538 "name": "Nvme1", 00:10:08.538 "trtype": "tcp", 00:10:08.538 "traddr": "10.0.0.2", 00:10:08.538 "adrfam": "ipv4", 00:10:08.538 "trsvcid": "4420", 00:10:08.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.538 "hdgst": false, 00:10:08.538 "ddgst": false 00:10:08.538 }, 00:10:08.538 "method": "bdev_nvme_attach_controller" 00:10:08.538 }' 00:10:08.538 [2024-12-06 19:08:18.941361] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:08.538 [2024-12-06 19:08:18.941360] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:10:08.538 [2024-12-06 19:08:18.941456] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:08.538 [2024-12-06 19:08:18.941456] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:08.538 [2024-12-06 19:08:18.942718] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:08.538 [2024-12-06 19:08:18.942718] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:08.538 [2024-12-06 19:08:18.942798] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:08.538 [2024-12-06 19:08:18.942798] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:08.796 [2024-12-06 19:08:19.124461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.796 [2024-12-06 19:08:19.177755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:08.796 [2024-12-06 19:08:19.223209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.796 [2024-12-06 19:08:19.279634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:08.796 [2024-12-06 19:08:19.298861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.796 [2024-12-06 19:08:19.349015] reactor.c:1005:reactor_run: *NOTICE*:
Reactor started on core 6 00:10:09.055 [2024-12-06 19:08:19.374206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.055 [2024-12-06 19:08:19.424957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:09.055 Running I/O for 1 seconds... 00:10:09.055 Running I/O for 1 seconds... 00:10:09.055 Running I/O for 1 seconds... 00:10:09.314 Running I/O for 1 seconds... 00:10:10.249 7703.00 IOPS, 30.09 MiB/s [2024-12-06T18:08:20.826Z] 185544.00 IOPS, 724.78 MiB/s 00:10:10.249 Latency(us) 00:10:10.249 [2024-12-06T18:08:20.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.249 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:10.249 Nvme1n1 : 1.00 185194.39 723.42 0.00 0.00 687.32 295.82 1881.13 00:10:10.249 [2024-12-06T18:08:20.826Z] =================================================================================================================== 00:10:10.249 [2024-12-06T18:08:20.826Z] Total : 185194.39 723.42 0.00 0.00 687.32 295.82 1881.13 00:10:10.249 00:10:10.249 Latency(us) 00:10:10.249 [2024-12-06T18:08:20.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.249 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:10.249 Nvme1n1 : 1.01 7746.34 30.26 0.00 0.00 16430.40 8252.68 18641.35 00:10:10.249 [2024-12-06T18:08:20.826Z] =================================================================================================================== 00:10:10.249 [2024-12-06T18:08:20.826Z] Total : 7746.34 30.26 0.00 0.00 16430.40 8252.68 18641.35 00:10:10.249 8590.00 IOPS, 33.55 MiB/s 00:10:10.249 Latency(us) 00:10:10.249 [2024-12-06T18:08:20.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.249 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:10.249 Nvme1n1 : 1.01 8650.36 33.79 0.00 0.00 14725.35 7233.23 24660.95 00:10:10.249 [2024-12-06T18:08:20.826Z] 
=================================================================================================================== 00:10:10.249 [2024-12-06T18:08:20.826Z] Total : 8650.36 33.79 0.00 0.00 14725.35 7233.23 24660.95 00:10:10.249 9754.00 IOPS, 38.10 MiB/s 00:10:10.249 Latency(us) 00:10:10.249 [2024-12-06T18:08:20.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.249 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:10.249 Nvme1n1 : 1.01 9830.12 38.40 0.00 0.00 12975.64 2500.08 19418.07 00:10:10.249 [2024-12-06T18:08:20.826Z] =================================================================================================================== 00:10:10.249 [2024-12-06T18:08:20.826Z] Total : 9830.12 38.40 0.00 0.00 12975.64 2500.08 19418.07 00:10:10.249 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1030358 00:10:10.249 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1030360 00:10:10.249 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1030363 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 
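In the latency tables above, the MiB/s column follows directly from the IOPS column at the 4096-byte IO size these jobs use. A quick sketch (the helper name is ours) reproducing two of the figures:

```python
def iops_to_mibs(iops, io_size=4096):
    """Convert an IOPS figure to MiB/s for a fixed IO size in bytes."""
    return iops * io_size / (1 << 20)

# Figures from the flush and read jobs above
print(round(iops_to_mibs(185194.39), 2))  # 723.42
print(round(iops_to_mibs(7746.34), 2))    # 30.26
```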
00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.507 rmmod nvme_tcp 00:10:10.507 rmmod nvme_fabrics 00:10:10.507 rmmod nvme_keyring 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1030265 ']' 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1030265 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1030265 ']' 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1030265 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1030265 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1030265' 00:10:10.507 killing process with pid 1030265 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1030265 00:10:10.507 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1030265 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.765 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.675 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr 
flush cvl_0_1 00:10:12.675 00:10:12.675 real 0m7.297s 00:10:12.675 user 0m15.511s 00:10:12.675 sys 0m3.833s 00:10:12.675 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.675 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.675 ************************************ 00:10:12.675 END TEST nvmf_bdev_io_wait 00:10:12.675 ************************************ 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.934 ************************************ 00:10:12.934 START TEST nvmf_queue_depth 00:10:12.934 ************************************ 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:12.934 * Looking for test storage... 
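The `real 0m7.297s` / `user 0m15.511s` / `sys 0m3.833s` summary printed at the end of the nvmf_bdev_io_wait test uses bash's `NmS.SSSs` time format. A small sketch (the parser is ours) converting those strings to seconds:

```python
import re

def bash_time_to_seconds(s):
    """Parse bash's `time` output format, e.g. '0m7.297s' -> 7.297 seconds."""
    m = re.fullmatch(r"(\d+)m([\d.]+)s", s)
    if not m:
        raise ValueError(f"unrecognized time string: {s!r}")
    return int(m.group(1)) * 60 + float(m.group(2))

# Figures from the test summary above
print(bash_time_to_seconds("0m7.297s"))   # 7.297
print(bash_time_to_seconds("0m15.511s"))  # 15.511
```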
00:10:12.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:12.934 
19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:12.934 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:12.934 --rc genhtml_branch_coverage=1 00:10:12.934 --rc genhtml_function_coverage=1 00:10:12.934 --rc genhtml_legend=1 00:10:12.934 --rc geninfo_all_blocks=1 00:10:12.934 --rc geninfo_unexecuted_blocks=1 00:10:12.934 00:10:12.934 ' 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:12.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.934 --rc genhtml_branch_coverage=1 00:10:12.934 --rc genhtml_function_coverage=1 00:10:12.934 --rc genhtml_legend=1 00:10:12.934 --rc geninfo_all_blocks=1 00:10:12.934 --rc geninfo_unexecuted_blocks=1 00:10:12.934 00:10:12.934 ' 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:12.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.934 --rc genhtml_branch_coverage=1 00:10:12.934 --rc genhtml_function_coverage=1 00:10:12.934 --rc genhtml_legend=1 00:10:12.934 --rc geninfo_all_blocks=1 00:10:12.934 --rc geninfo_unexecuted_blocks=1 00:10:12.934 00:10:12.934 ' 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:12.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.934 --rc genhtml_branch_coverage=1 00:10:12.934 --rc genhtml_function_coverage=1 00:10:12.934 --rc genhtml_legend=1 00:10:12.934 --rc geninfo_all_blocks=1 00:10:12.934 --rc geninfo_unexecuted_blocks=1 00:10:12.934 00:10:12.934 ' 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.934 19:08:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.934 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.935 19:08:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.935 19:08:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.935 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.461 19:08:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:15.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:15.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:15.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.461 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:15.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.462 
19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:10:15.462 00:10:15.462 --- 10.0.0.2 ping statistics --- 00:10:15.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.462 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:15.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:10:15.462 00:10:15.462 --- 10.0.0.1 ping statistics --- 00:10:15.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.462 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1032592 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1032592 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1032592 ']' 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.462 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.462 [2024-12-06 19:08:25.749304] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:15.462 [2024-12-06 19:08:25.749406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.462 [2024-12-06 19:08:25.826091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.462 [2024-12-06 19:08:25.885400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.462 [2024-12-06 19:08:25.885462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:15.462 [2024-12-06 19:08:25.885492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.462 [2024-12-06 19:08:25.885503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.462 [2024-12-06 19:08:25.885512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.462 [2024-12-06 19:08:25.886182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.462 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.462 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:15.462 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.462 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.462 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.462 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.462 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.462 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.462 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.462 [2024-12-06 19:08:26.035574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.718 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 Malloc0 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 [2024-12-06 19:08:26.084424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.719 19:08:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1032622 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1032622 /var/tmp/bdevperf.sock 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1032622 ']' 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:15.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.719 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 [2024-12-06 19:08:26.134453] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:10:15.719 [2024-12-06 19:08:26.134529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032622 ] 00:10:15.719 [2024-12-06 19:08:26.200708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.719 [2024-12-06 19:08:26.258683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.975 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.975 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:15.975 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:15.975 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.975 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.232 NVMe0n1 00:10:16.232 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.232 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:16.232 Running I/O for 10 seconds... 
00:10:18.543 8461.00 IOPS, 33.05 MiB/s [2024-12-06T18:08:30.053Z] 8714.50 IOPS, 34.04 MiB/s [2024-12-06T18:08:30.987Z] 8863.33 IOPS, 34.62 MiB/s [2024-12-06T18:08:31.922Z] 8867.75 IOPS, 34.64 MiB/s [2024-12-06T18:08:32.857Z] 8894.40 IOPS, 34.74 MiB/s [2024-12-06T18:08:33.791Z] 8908.33 IOPS, 34.80 MiB/s [2024-12-06T18:08:35.167Z] 8916.29 IOPS, 34.83 MiB/s [2024-12-06T18:08:36.098Z] 8950.25 IOPS, 34.96 MiB/s [2024-12-06T18:08:37.131Z] 8974.89 IOPS, 35.06 MiB/s [2024-12-06T18:08:37.131Z] 8995.70 IOPS, 35.14 MiB/s 00:10:26.554 Latency(us) 00:10:26.554 [2024-12-06T18:08:37.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.554 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:26.554 Verification LBA range: start 0x0 length 0x4000 00:10:26.554 NVMe0n1 : 10.10 9010.41 35.20 0.00 0.00 113219.93 20971.52 71458.51 00:10:26.554 [2024-12-06T18:08:37.131Z] =================================================================================================================== 00:10:26.554 [2024-12-06T18:08:37.131Z] Total : 9010.41 35.20 0.00 0.00 113219.93 20971.52 71458.51 00:10:26.554 { 00:10:26.554 "results": [ 00:10:26.554 { 00:10:26.554 "job": "NVMe0n1", 00:10:26.554 "core_mask": "0x1", 00:10:26.554 "workload": "verify", 00:10:26.554 "status": "finished", 00:10:26.554 "verify_range": { 00:10:26.554 "start": 0, 00:10:26.554 "length": 16384 00:10:26.554 }, 00:10:26.554 "queue_depth": 1024, 00:10:26.554 "io_size": 4096, 00:10:26.554 "runtime": 10.096321, 00:10:26.554 "iops": 9010.410821922163, 00:10:26.554 "mibps": 35.19691727313345, 00:10:26.554 "io_failed": 0, 00:10:26.554 "io_timeout": 0, 00:10:26.554 "avg_latency_us": 113219.93078133932, 00:10:26.554 "min_latency_us": 20971.52, 00:10:26.554 "max_latency_us": 71458.5125925926 00:10:26.554 } 00:10:26.554 ], 00:10:26.554 "core_count": 1 00:10:26.554 } 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
1032622 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1032622 ']' 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1032622 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1032622 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1032622' 00:10:26.554 killing process with pid 1032622 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1032622 00:10:26.554 Received shutdown signal, test time was about 10.000000 seconds 00:10:26.554 00:10:26.554 Latency(us) 00:10:26.554 [2024-12-06T18:08:37.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.554 [2024-12-06T18:08:37.131Z] =================================================================================================================== 00:10:26.554 [2024-12-06T18:08:37.131Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:26.554 19:08:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1032622 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
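The bdevperf summary above reports IOPS, MiB/s, and average latency for the same 10-second run. Those figures are internally consistent: MiB/s is just IOPS times the IO size, and by Little's law the mean latency should be roughly the queue depth divided by the completion rate. A quick cross-check sketch, using the numbers copied from the results JSON above (Little's law only gives an approximation here, so the latency check is loose):

```python
# Cross-check the bdevperf results JSON above (values copied from the log).
iops = 9010.410821922163
io_size = 4096          # bdevperf was started with -o 4096
queue_depth = 1024      # and -q 1024

# Throughput in MiB/s is IOPS times IO size (4096 B = 1/256 MiB).
mibps = iops * io_size / 2**20
print(mibps)  # matches the reported "mibps": 35.19691727313345

# Little's law: mean latency ~= outstanding IOs / completion rate.
est_latency_us = queue_depth / iops * 1e6
print(est_latency_us)  # ~113646 us, within ~0.4% of the measured 113219.93 us
```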
00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.815 rmmod nvme_tcp 00:10:26.815 rmmod nvme_fabrics 00:10:26.815 rmmod nvme_keyring 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1032592 ']' 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1032592 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1032592 ']' 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1032592 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1032592 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1032592' 00:10:26.815 killing process with pid 1032592 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1032592 00:10:26.815 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1032592 00:10:27.074 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.075 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.981 19:08:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.981 00:10:28.981 real 0m16.215s 00:10:28.981 user 0m22.883s 00:10:28.981 sys 0m3.062s 00:10:28.981 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.981 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.981 ************************************ 00:10:28.981 END TEST nvmf_queue_depth 00:10:28.981 ************************************ 00:10:28.981 19:08:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:28.981 19:08:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.981 19:08:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.981 19:08:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.241 ************************************ 00:10:29.241 START TEST nvmf_target_multipath 00:10:29.241 ************************************ 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:29.241 * Looking for test storage... 
00:10:29.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:29.241 19:08:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
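The xtrace above steps through the `cmp_versions` helper in scripts/common.sh deciding `lt 1.15 2`: each version string is split on `.`, `-`, and `:` (`IFS=.-:`) and the components are compared numerically, so 1.15 sorts below 2 even though the string "1.15" sorts above "2". A minimal re-implementation of that comparison in Python, for illustration only (the function name `version_lt` is made up; the real helper is the shell `cmp_versions` traced above):

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    """Return True if v1 < v2, comparing version components numerically.

    Mirrors the component-by-component comparison that the cmp_versions
    shell helper performs; missing trailing components count as 0.
    """
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b  # Python list comparison is already component-wise

print(version_lt("1.15", "2"))    # True  -- numeric, not lexicographic
print(version_lt("2.39", "2.4"))  # False -- 39 > 4 as integers
```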
00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.241 --rc genhtml_branch_coverage=1 00:10:29.241 --rc genhtml_function_coverage=1 00:10:29.241 --rc genhtml_legend=1 00:10:29.241 --rc geninfo_all_blocks=1 00:10:29.241 --rc geninfo_unexecuted_blocks=1 00:10:29.241 00:10:29.241 ' 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.241 --rc genhtml_branch_coverage=1 00:10:29.241 --rc genhtml_function_coverage=1 00:10:29.241 --rc genhtml_legend=1 00:10:29.241 --rc geninfo_all_blocks=1 00:10:29.241 --rc geninfo_unexecuted_blocks=1 00:10:29.241 00:10:29.241 ' 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.241 --rc genhtml_branch_coverage=1 00:10:29.241 --rc genhtml_function_coverage=1 00:10:29.241 --rc genhtml_legend=1 00:10:29.241 --rc geninfo_all_blocks=1 00:10:29.241 --rc geninfo_unexecuted_blocks=1 00:10:29.241 00:10:29.241 ' 00:10:29.241 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.241 --rc genhtml_branch_coverage=1 00:10:29.241 --rc genhtml_function_coverage=1 00:10:29.242 --rc genhtml_legend=1 00:10:29.242 --rc geninfo_all_blocks=1 00:10:29.242 --rc geninfo_unexecuted_blocks=1 00:10:29.242 00:10:29.242 ' 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:29.242 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:31.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:31.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:31.789 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.789 19:08:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:31.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.789 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:10:31.789 00:10:31.789 --- 10.0.0.2 ping statistics --- 00:10:31.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.789 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:10:31.789 00:10:31.789 --- 10.0.0.1 ping statistics --- 00:10:31.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.789 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.789 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:31.790 only one NIC for nvmf test 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:31.790 19:08:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.790 rmmod nvme_tcp 00:10:31.790 rmmod nvme_fabrics 00:10:31.790 rmmod nvme_keyring 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.790 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.700 00:10:33.700 real 0m4.668s 00:10:33.700 user 0m0.934s 00:10:33.700 sys 0m1.739s 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:33.700 ************************************ 00:10:33.700 END TEST nvmf_target_multipath 00:10:33.700 ************************************ 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.700 19:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.961 ************************************ 00:10:33.962 START TEST nvmf_zcopy 00:10:33.962 ************************************ 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:33.962 * Looking for test storage... 00:10:33.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.962 19:08:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:33.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.962 --rc genhtml_branch_coverage=1 00:10:33.962 --rc genhtml_function_coverage=1 00:10:33.962 --rc genhtml_legend=1 00:10:33.962 --rc geninfo_all_blocks=1 00:10:33.962 --rc geninfo_unexecuted_blocks=1 00:10:33.962 00:10:33.962 ' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:33.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.962 --rc genhtml_branch_coverage=1 00:10:33.962 --rc genhtml_function_coverage=1 00:10:33.962 --rc genhtml_legend=1 00:10:33.962 --rc geninfo_all_blocks=1 00:10:33.962 --rc geninfo_unexecuted_blocks=1 00:10:33.962 00:10:33.962 ' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:33.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.962 --rc genhtml_branch_coverage=1 00:10:33.962 --rc genhtml_function_coverage=1 00:10:33.962 --rc genhtml_legend=1 00:10:33.962 --rc geninfo_all_blocks=1 00:10:33.962 --rc geninfo_unexecuted_blocks=1 00:10:33.962 00:10:33.962 ' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:33.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.962 --rc genhtml_branch_coverage=1 00:10:33.962 --rc 
genhtml_function_coverage=1 00:10:33.962 --rc genhtml_legend=1 00:10:33.962 --rc geninfo_all_blocks=1 00:10:33.962 --rc geninfo_unexecuted_blocks=1 00:10:33.962 00:10:33.962 ' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.962 19:08:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.962 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.963 19:08:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.963 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.498 19:08:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:36.498 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:36.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:36.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:36.498 19:08:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:36.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.498 19:08:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.498 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:10:36.499 00:10:36.499 --- 10.0.0.2 ping statistics --- 00:10:36.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.499 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:10:36.499 00:10:36.499 --- 10.0.0.1 ping statistics --- 00:10:36.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.499 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1037834 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1037834 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1037834 ']' 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.499 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.499 [2024-12-06 19:08:46.950377] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:36.499 [2024-12-06 19:08:46.950473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.499 [2024-12-06 19:08:47.021070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.758 [2024-12-06 19:08:47.077872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.758 [2024-12-06 19:08:47.077924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
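The `waitforlisten 1037834` step above blocks until the freshly launched `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern, under the assumption that "ready" just means the UNIX-domain socket exists (`wait_for_sock` and its arguments are illustrative names, not SPDK's helper, which additionally probes the socket via RPC):

```shell
# Hedged sketch of the waitforlisten pattern: poll until a daemon has
# created its UNIX-domain RPC socket, giving up after max_retries tries.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i < max_retries )); do
        # -S is true once the path exists and is a socket
        [ -S "$sock" ] && return 0
        sleep 0.1
        (( ++i ))
    done
    return 1
}

wait_for_sock /var/tmp/spdk.sock 3 || echo "no listener yet"
```

The retry cap matters: without it, a target that crashed during startup would hang the test run instead of failing it.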
00:10:36.758 [2024-12-06 19:08:47.077961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.758 [2024-12-06 19:08:47.077973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.758 [2024-12-06 19:08:47.077984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.758 [2024-12-06 19:08:47.078709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.758 [2024-12-06 19:08:47.219275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.758 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.759 [2024-12-06 19:08:47.235467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.759 malloc0 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:36.759 { 00:10:36.759 "params": { 00:10:36.759 "name": "Nvme$subsystem", 00:10:36.759 "trtype": "$TEST_TRANSPORT", 00:10:36.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:36.759 "adrfam": "ipv4", 00:10:36.759 "trsvcid": "$NVMF_PORT", 00:10:36.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:36.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:36.759 "hdgst": ${hdgst:-false}, 00:10:36.759 "ddgst": ${ddgst:-false} 00:10:36.759 }, 00:10:36.759 "method": "bdev_nvme_attach_controller" 00:10:36.759 } 00:10:36.759 EOF 00:10:36.759 )") 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
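The `gen_nvmf_target_json` trace just above builds one JSON fragment per subsystem with `config+=("$(cat <<-EOF ...)")`, then joins the fragments with `IFS=,` and validates the result with `jq .`. A condensed, self-contained sketch of that heredoc-accumulation pattern (the subsystem list and field values here are illustrative, not the script's full parameter set):

```shell
# Accumulate one JSON fragment per subsystem in a bash array.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{"name": "Nvme$subsystem", "trsvcid": "4420"}
EOF
)")
done

# Join the fragments with commas in a subshell so the global IFS is
# untouched; the real script then pipes the result through `jq .`.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '{"subsystems":[%s]}\n' "$joined"
```

Using `"${config[*]}"` (star, quoted) rather than `"${config[@]}"` is what makes the first character of `IFS` the join separator.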
00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:36.759 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:36.759 "params": { 00:10:36.759 "name": "Nvme1", 00:10:36.759 "trtype": "tcp", 00:10:36.759 "traddr": "10.0.0.2", 00:10:36.759 "adrfam": "ipv4", 00:10:36.759 "trsvcid": "4420", 00:10:36.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:36.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:36.759 "hdgst": false, 00:10:36.759 "ddgst": false 00:10:36.759 }, 00:10:36.759 "method": "bdev_nvme_attach_controller" 00:10:36.759 }' 00:10:36.759 [2024-12-06 19:08:47.316315] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:36.759 [2024-12-06 19:08:47.316392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037975 ] 00:10:37.018 [2024-12-06 19:08:47.383882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.018 [2024-12-06 19:08:47.442367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.276 Running I/O for 10 seconds... 
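Earlier in this chunk, `scripts/common.sh` traces a component-wise dotted-version comparison (`read -ra ver2`, `decimal`, `(( ver1[v] < ver2[v] ))`). Condensed into a single helper with a name of our choosing, the pattern is: split both versions on dots, treat missing components as 0, and compare left to right:

```shell
# Component-wise "less than" for dotted version strings, condensed from
# the comparison pattern traced by scripts/common.sh above.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components default to 0, so 1.2 == 1.2.0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.2 2.0 && echo "1.2 < 2.0"
```

Comparing component-by-component as integers is what keeps `1.10` greater than `1.9`, which a plain string comparison would get wrong.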
00:10:39.581 5817.00 IOPS, 45.45 MiB/s [2024-12-06T18:08:51.093Z] 5845.00 IOPS, 45.66 MiB/s [2024-12-06T18:08:52.025Z] 5842.00 IOPS, 45.64 MiB/s [2024-12-06T18:08:52.968Z] 5852.75 IOPS, 45.72 MiB/s [2024-12-06T18:08:53.898Z] 5856.20 IOPS, 45.75 MiB/s [2024-12-06T18:08:55.282Z] 5857.00 IOPS, 45.76 MiB/s [2024-12-06T18:08:56.216Z] 5858.00 IOPS, 45.77 MiB/s [2024-12-06T18:08:57.150Z] 5862.62 IOPS, 45.80 MiB/s [2024-12-06T18:08:58.085Z] 5866.22 IOPS, 45.83 MiB/s [2024-12-06T18:08:58.085Z] 5866.60 IOPS, 45.83 MiB/s 00:10:47.508 Latency(us) 00:10:47.508 [2024-12-06T18:08:58.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.508 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:47.508 Verification LBA range: start 0x0 length 0x1000 00:10:47.508 Nvme1n1 : 10.01 5871.56 45.87 0.00 0.00 21742.29 904.15 30680.56 00:10:47.508 [2024-12-06T18:08:58.085Z] =================================================================================================================== 00:10:47.508 [2024-12-06T18:08:58.085Z] Total : 5871.56 45.87 0.00 0.00 21742.29 904.15 30680.56 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1039178 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:47.767 19:08:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:47.767 { 00:10:47.767 "params": { 00:10:47.767 "name": "Nvme$subsystem", 00:10:47.767 "trtype": "$TEST_TRANSPORT", 00:10:47.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:47.767 "adrfam": "ipv4", 00:10:47.767 "trsvcid": "$NVMF_PORT", 00:10:47.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:47.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:47.767 "hdgst": ${hdgst:-false}, 00:10:47.767 "ddgst": ${ddgst:-false} 00:10:47.767 }, 00:10:47.767 "method": "bdev_nvme_attach_controller" 00:10:47.767 } 00:10:47.767 EOF 00:10:47.767 )") 00:10:47.767 [2024-12-06 19:08:58.106193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:47.767 [2024-12-06 19:08:58.106235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:47.767 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:47.767 "params": { 00:10:47.767 "name": "Nvme1", 00:10:47.767 "trtype": "tcp", 00:10:47.767 "traddr": "10.0.0.2", 00:10:47.767 "adrfam": "ipv4", 00:10:47.767 "trsvcid": "4420", 00:10:47.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:47.767 "hdgst": false, 00:10:47.767 "ddgst": false 00:10:47.767 }, 00:10:47.767 "method": "bdev_nvme_attach_controller" 00:10:47.767 }' 00:10:47.767 [2024-12-06 19:08:58.114148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.114170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.122175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.122196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.130194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.130214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.138216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.138236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.146186] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
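The `gen_nvmf_target_json` trace above (nvmf/common.sh lines 560-586) shows how the bdevperf config is built: one heredoc-templated JSON fragment per subsystem, accumulated into `config`, joined with `IFS=,`, and emitted with `printf '%s\n'`. The sketch below reproduces that pattern with the values substituted in this run (tcp, 10.0.0.2, 4420); the function body is an approximation of the real helper, not a verbatim copy, and the real script additionally filters the result through `jq`.

```shell
#!/usr/bin/env bash
# Approximation of nvmf/common.sh's gen_nvmf_target_json: emit one
# bdev_nvme_attach_controller JSON fragment per requested subsystem.
gen_nvmf_target_json() {
  local subsystem config=()
  # "${@:-1}" defaults to a single subsystem "1" when no args are given.
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # Join the per-subsystem fragments with commas, as in the trace above.
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

The harness feeds the result to bdevperf over a process substitution, which is why the invocation above reads `bdevperf --json /dev/fd/63 ...`; with a single subsystem the output is the same standalone JSON object printed in the log.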
00:10:47.767 [2024-12-06 19:08:58.146236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.146256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039178 ] 00:10:47.767 [2024-12-06 19:08:58.146269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.154258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.154279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.162278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.162297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.170301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.170320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.178323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.178342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.186347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.186368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.194368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.194396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:47.767 [2024-12-06 19:08:58.202391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.202412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.210411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.210431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.215316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.767 [2024-12-06 19:08:58.218433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.218453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.226505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.226541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.234509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.234543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.242500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.242520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.250521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.250541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.258543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.258562] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.266563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.266583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.274585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.274605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.278299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.767 [2024-12-06 19:08:58.282608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.282628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.290632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.767 [2024-12-06 19:08:58.290678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.767 [2024-12-06 19:08:58.298722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.768 [2024-12-06 19:08:58.298755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.768 [2024-12-06 19:08:58.306739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.768 [2024-12-06 19:08:58.306775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.768 [2024-12-06 19:08:58.314766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.768 [2024-12-06 19:08:58.314804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.768 [2024-12-06 19:08:58.322815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:10:47.768 [2024-12-06 19:08:58.322855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.768 [2024-12-06 19:08:58.330822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.768 [2024-12-06 19:08:58.330859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.768 [2024-12-06 19:08:58.338826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.768 [2024-12-06 19:08:58.338869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.346808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.346832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.354877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.354915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.362891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.362929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.370912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.370963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.378895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.378918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.386917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 
19:08:58.386940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.394965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.394992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.402982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.403006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.411004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.411043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.419040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.419063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.427057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.427078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.435072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.435092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.443091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.443111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.451109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.451128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.459135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.459155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.467161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.467182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.475183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.475205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.483205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.483226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.026 [2024-12-06 19:08:58.491224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.026 [2024-12-06 19:08:58.491250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.027 [2024-12-06 19:08:58.499250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.027 [2024-12-06 19:08:58.499274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.027 Running I/O for 5 seconds... 
00:10:48.027 [2024-12-06 19:08:58.507274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.027 [2024-12-06 19:08:58.507297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.027 [2024-12-06 19:08:58.520675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.027 [2024-12-06 19:08:58.520703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.027 [2024-12-06 19:08:58.532075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.027 [2024-12-06 19:08:58.532103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.027 [2024-12-06 19:08:58.544829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.027 [2024-12-06 19:08:58.544875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.027 [2024-12-06 19:08:58.557467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.027 [2024-12-06 19:08:58.557494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.027 [2024-12-06 19:08:58.570026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.027 [2024-12-06 19:08:58.570053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.027 [2024-12-06 19:08:58.582618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.027 [2024-12-06 19:08:58.582645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.027 [2024-12-06 19:08:58.594873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.027 [2024-12-06 19:08:58.594900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.284 [2024-12-06 19:08:58.607078] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.607106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.619011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.619052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.631359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.631385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.643662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.643713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.655635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.655662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.667555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.667582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.679237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.679265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.690757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.690784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.702561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.702587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.714147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.714174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.726216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.726244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.738083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.738110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.750226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.750253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.762234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.762261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.774153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.774193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.785828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.785856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.797348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 
[2024-12-06 19:08:58.797375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.809316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.809343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.821098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.821124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.833151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.833192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.845037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.845064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.285 [2024-12-06 19:08:58.857367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.285 [2024-12-06 19:08:58.857395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.543 [2024-12-06 19:08:58.869969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.543 [2024-12-06 19:08:58.870011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.543 [2024-12-06 19:08:58.882360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.543 [2024-12-06 19:08:58.882386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.543 [2024-12-06 19:08:58.894888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.543 [2024-12-06 19:08:58.894915] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.543 [2024-12-06 19:08:58.907241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.543 [2024-12-06 19:08:58.907268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.543 [2024-12-06 19:08:58.919181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.543 [2024-12-06 19:08:58.919208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.543 [2024-12-06 19:08:58.931402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.543 [2024-12-06 19:08:58.931429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.543 [2024-12-06 19:08:58.943780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.543 [2024-12-06 19:08:58.943807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.543 [2024-12-06 19:08:58.955721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.543 [2024-12-06 19:08:58.955751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.543 [2024-12-06 19:08:58.968214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:58.968241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:58.980276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:58.980303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:58.992193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:58.992220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:48.544 [2024-12-06 19:08:59.004398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.004424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:59.016429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.016456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:59.028555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.028582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:59.040995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.041022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:59.053386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.053412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:59.065025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.065052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:59.076859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.076902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:59.090263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.090289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:59.101394] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.101420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.544 [2024-12-06 19:08:59.113625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.544 [2024-12-06 19:08:59.113652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.803 [2024-12-06 19:08:59.125566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.803 [2024-12-06 19:08:59.125593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.803 [2024-12-06 19:08:59.137860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.803 [2024-12-06 19:08:59.137888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.803 [2024-12-06 19:08:59.150267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.803 [2024-12-06 19:08:59.150308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.803 [2024-12-06 19:08:59.161853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.803 [2024-12-06 19:08:59.161880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.803 [2024-12-06 19:08:59.173498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.803 [2024-12-06 19:08:59.173525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.803 [2024-12-06 19:08:59.185213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.803 [2024-12-06 19:08:59.185239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.803 [2024-12-06 19:08:59.197274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:10:48.803 [2024-12-06 19:08:59.197301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.803 [2024-12-06 19:08:59.209496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.803 [2024-12-06 19:08:59.209523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-record error pair (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: "Unable to add namespace") repeats roughly every 12 ms, timestamps 2024-12-06 19:08:59.221128 through 19:09:01.145549, elapsed 00:10:48.803 through 00:10:50.616; only the two interleaved fio progress samples below differ ...]
00:10:49.063 10524.00 IOPS, 82.22 MiB/s [2024-12-06T18:08:59.640Z]
00:10:50.099 10542.00 IOPS, 82.36 MiB/s [2024-12-06T18:09:00.676Z]
00:10:50.616 [2024-12-06 19:09:01.157410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.616 [2024-12-06 19:09:01.157437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:10:50.616 [2024-12-06 19:09:01.169744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.616 [2024-12-06 19:09:01.169771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.617 [2024-12-06 19:09:01.182274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.617 [2024-12-06 19:09:01.182301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.194289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.194316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.206551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.206584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.218636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.218700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.230764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.230793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.243394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.243421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.255292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.255318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.267632] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.267684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.279840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.279868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.291862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.291890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.303314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.303342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.315256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.315283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.327181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.327208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.339347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.339374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.352086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.352113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.363891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.363918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.376138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.376165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.388094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.388121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.400396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.400423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.412610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.412637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.424932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.424973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.436811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.436846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.875 [2024-12-06 19:09:01.448855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.875 [2024-12-06 19:09:01.448883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.133 [2024-12-06 19:09:01.460810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.133 
[2024-12-06 19:09:01.460838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.133 [2024-12-06 19:09:01.472501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.133 [2024-12-06 19:09:01.472527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.133 [2024-12-06 19:09:01.484551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.133 [2024-12-06 19:09:01.484578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.133 [2024-12-06 19:09:01.496688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.133 [2024-12-06 19:09:01.496722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.133 [2024-12-06 19:09:01.508493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.133 [2024-12-06 19:09:01.508520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.133 10555.00 IOPS, 82.46 MiB/s [2024-12-06T18:09:01.710Z] [2024-12-06 19:09:01.521000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.133 [2024-12-06 19:09:01.521026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.533139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.533165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.545042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.545068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.557211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 
[2024-12-06 19:09:01.557238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.569444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.569471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.582017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.582045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.593974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.594017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.605439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.605465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.617897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.617925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.630345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.630371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.642395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.642422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.654673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.654709] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.666545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.666572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.680227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.680255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.691522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.691550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.134 [2024-12-06 19:09:01.703753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.134 [2024-12-06 19:09:01.703782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.714970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.714998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.726869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.726897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.739020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.739047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.751013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.751040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:51.392 [2024-12-06 19:09:01.763314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.763342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.776150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.776178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.788412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.788440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.800282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.800309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.812401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.812427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.824596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.824622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.836736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.836764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.848685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.848712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.861186] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.861212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.873127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.873155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.887018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.887045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.898364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.898391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.909761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.909788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.921782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.921810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.933704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.933733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.945635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.945662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.392 [2024-12-06 19:09:01.957671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:51.392 [2024-12-06 19:09:01.957698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:01.969737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:01.969764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:01.981561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:01.981588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:01.993458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:01.993486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.005189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.005217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.016886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.016914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.028588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.028615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.040512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.040539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.052065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 
[2024-12-06 19:09:02.052092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.066226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.066254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.077971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.077998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.089815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.089843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.101898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.101926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.113379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.113406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.125018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.125045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.136991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.137019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.148922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.148949] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.160778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.160806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.172914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.172942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.185680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.185715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.197943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.197986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.209559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.209585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.650 [2024-12-06 19:09:02.221523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.650 [2024-12-06 19:09:02.221550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.233495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.233521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.245576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.245604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:51.909 [2024-12-06 19:09:02.257636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.257662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.269800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.269828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.281859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.281887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.294117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.294144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.306190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.306217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.318052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.318079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.329899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.329926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.344030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.344066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.355395] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.355422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.367301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.367328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.378888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.378916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.390534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.390562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.402601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.402628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.416553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.416581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.428052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.428079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.440125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.440151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.452052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.452078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.464090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.464131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.909 [2024-12-06 19:09:02.476309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.909 [2024-12-06 19:09:02.476335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.166 [2024-12-06 19:09:02.488448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.166 [2024-12-06 19:09:02.488475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.166 [2024-12-06 19:09:02.500195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.166 [2024-12-06 19:09:02.500222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.166 [2024-12-06 19:09:02.512099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.166 [2024-12-06 19:09:02.512126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.166 10568.75 IOPS, 82.57 MiB/s [2024-12-06T18:09:02.743Z] [2024-12-06 19:09:02.523415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.166 [2024-12-06 19:09:02.523441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.166 [2024-12-06 19:09:02.536007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.166 [2024-12-06 19:09:02.536034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.166 [2024-12-06 19:09:02.547851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:52.166 [2024-12-06 19:09:02.547879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.166 [... repeated "Requested NSID 1 already in use" / "Unable to add namespace" error pairs truncated ...] 00:10:53.198 10583.20 IOPS, 82.68 MiB/s [2024-12-06T18:09:03.775Z] [2024-12-06 19:09:03.529109] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.198 [2024-12-06 19:09:03.529135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.198 00:10:53.198 Latency(us) 00:10:53.198 [2024-12-06T18:09:03.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.198 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:53.198 Nvme1n1 : 5.01 10584.53 82.69 0.00 0.00 12076.23 5461.33 22136.60 00:10:53.198 [2024-12-06T18:09:03.775Z] =================================================================================================================== 00:10:53.198 [2024-12-06T18:09:03.775Z] Total : 10584.53 82.69 0.00 0.00 12076.23 5461.33 22136.60 00:10:53.198 [... repeated "Requested NSID 1 already in use" / "Unable to add namespace" error pairs truncated ...] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1039178) - No such process 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1039178 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.199 19:09:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.199 delay0 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.199 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.456 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.456 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:53.456 [2024-12-06 19:09:03.874558] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:01.562 [2024-12-06 19:09:10.994206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76290 is same with the state(6) to be set 00:11:01.562 [2024-12-06 19:09:10.994269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76290 is same with the state(6) to be set 00:11:01.562 Initializing NVMe Controllers 00:11:01.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:11:01.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:01.562 Initialization complete. Launching workers. 00:11:01.562 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 261, failed: 17087 00:11:01.562 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17256, failed to submit 92 00:11:01.562 success 17140, unsuccessful 116, failed 0 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.562 rmmod nvme_tcp 00:11:01.562 rmmod nvme_fabrics 00:11:01.562 rmmod nvme_keyring 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1037834 ']' 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1037834 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1037834 ']' 00:11:01.562 19:09:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1037834 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1037834 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1037834' 00:11:01.562 killing process with pid 1037834 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1037834 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1037834 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.562 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.563 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.939 00:11:02.939 real 0m29.085s 00:11:02.939 user 0m41.864s 00:11:02.939 sys 0m9.303s 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:02.939 ************************************ 00:11:02.939 END TEST nvmf_zcopy 00:11:02.939 ************************************ 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.939 ************************************ 00:11:02.939 START TEST nvmf_nmic 00:11:02.939 ************************************ 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:02.939 * Looking for test storage... 
00:11:02.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.939 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.198 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.198 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.198 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.198 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.198 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.199 19:09:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.199 --rc genhtml_branch_coverage=1 00:11:03.199 --rc genhtml_function_coverage=1 00:11:03.199 --rc genhtml_legend=1 00:11:03.199 --rc geninfo_all_blocks=1 00:11:03.199 --rc geninfo_unexecuted_blocks=1 
00:11:03.199 00:11:03.199 ' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.199 --rc genhtml_branch_coverage=1 00:11:03.199 --rc genhtml_function_coverage=1 00:11:03.199 --rc genhtml_legend=1 00:11:03.199 --rc geninfo_all_blocks=1 00:11:03.199 --rc geninfo_unexecuted_blocks=1 00:11:03.199 00:11:03.199 ' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.199 --rc genhtml_branch_coverage=1 00:11:03.199 --rc genhtml_function_coverage=1 00:11:03.199 --rc genhtml_legend=1 00:11:03.199 --rc geninfo_all_blocks=1 00:11:03.199 --rc geninfo_unexecuted_blocks=1 00:11:03.199 00:11:03.199 ' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.199 --rc genhtml_branch_coverage=1 00:11:03.199 --rc genhtml_function_coverage=1 00:11:03.199 --rc genhtml_legend=1 00:11:03.199 --rc geninfo_all_blocks=1 00:11:03.199 --rc geninfo_unexecuted_blocks=1 00:11:03.199 00:11:03.199 ' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.199 19:09:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.199 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.200 
19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.200 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.815 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.815 19:09:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:05.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:05.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:05.816 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:05.816 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:05.816 
19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:05.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:11:05.816 00:11:05.816 --- 10.0.0.2 ping statistics --- 00:11:05.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.816 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:11:05.816 00:11:05.816 --- 10.0.0.1 ping statistics --- 00:11:05.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.816 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:05.816 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1043328 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1043328 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1043328 ']' 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.817 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 [2024-12-06 19:09:15.998359] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:11:05.817 [2024-12-06 19:09:15.998430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.817 [2024-12-06 19:09:16.072782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.817 [2024-12-06 19:09:16.130679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.817 [2024-12-06 19:09:16.130738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:05.817 [2024-12-06 19:09:16.130768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.817 [2024-12-06 19:09:16.130780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.817 [2024-12-06 19:09:16.130789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.817 [2024-12-06 19:09:16.132363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.817 [2024-12-06 19:09:16.132418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.817 [2024-12-06 19:09:16.132458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.817 [2024-12-06 19:09:16.132461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 [2024-12-06 19:09:16.288143] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.817 
19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 Malloc0 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 [2024-12-06 19:09:16.358220] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:05.817 test case1: single bdev can't be used in multiple subsystems 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.817 [2024-12-06 19:09:16.382052] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:05.817 [2024-12-06 
19:09:16.382081] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:05.817 [2024-12-06 19:09:16.382112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.817 request: 00:11:05.817 { 00:11:05.817 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:05.817 "namespace": { 00:11:05.817 "bdev_name": "Malloc0", 00:11:05.817 "no_auto_visible": false, 00:11:05.817 "hide_metadata": false 00:11:05.817 }, 00:11:05.817 "method": "nvmf_subsystem_add_ns", 00:11:05.817 "req_id": 1 00:11:05.817 } 00:11:05.817 Got JSON-RPC error response 00:11:05.817 response: 00:11:05.817 { 00:11:05.817 "code": -32602, 00:11:05.817 "message": "Invalid parameters" 00:11:05.817 } 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:05.817 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:05.817 Adding namespace failed - expected result. 
00:11:05.818 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:05.818 test case2: host connect to nvmf target in multiple paths 00:11:05.818 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:05.818 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.818 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.818 [2024-12-06 19:09:16.390165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:06.077 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.077 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.641 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:07.204 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.204 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:07.204 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.204 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:07.204 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:09.123 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:09.123 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:09.123 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.123 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.123 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.123 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:09.123 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:09.380 [global] 00:11:09.380 thread=1 00:11:09.380 invalidate=1 00:11:09.380 rw=write 00:11:09.380 time_based=1 00:11:09.380 runtime=1 00:11:09.380 ioengine=libaio 00:11:09.380 direct=1 00:11:09.380 bs=4096 00:11:09.380 iodepth=1 00:11:09.380 norandommap=0 00:11:09.380 numjobs=1 00:11:09.380 00:11:09.380 verify_dump=1 00:11:09.380 verify_backlog=512 00:11:09.380 verify_state_save=0 00:11:09.380 do_verify=1 00:11:09.380 verify=crc32c-intel 00:11:09.380 [job0] 00:11:09.380 filename=/dev/nvme0n1 00:11:09.380 Could not set queue depth (nvme0n1) 00:11:09.380 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.380 fio-3.35 00:11:09.380 Starting 1 thread 00:11:10.750 00:11:10.750 job0: (groupid=0, jobs=1): err= 0: pid=1043853: Fri Dec 6 19:09:21 2024 00:11:10.750 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:10.750 slat (nsec): min=5144, max=59981, avg=12256.27, stdev=5985.23 00:11:10.750 clat (usec): min=197, max=3368, avg=254.40, stdev=75.48 00:11:10.750 lat (usec): min=203, max=3375, 
avg=266.66, stdev=76.05 00:11:10.750 clat percentiles (usec): 00:11:10.750 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 237], 00:11:10.750 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:11:10.750 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:11:10.750 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 979], 99.95th=[ 1057], 00:11:10.750 | 99.99th=[ 3359] 00:11:10.750 write: IOPS=2241, BW=8967KiB/s (9182kB/s)(8976KiB/1001msec); 0 zone resets 00:11:10.750 slat (nsec): min=6099, max=57010, avg=14288.82, stdev=7003.36 00:11:10.750 clat (usec): min=128, max=1609, avg=180.45, stdev=48.55 00:11:10.750 lat (usec): min=136, max=1649, avg=194.73, stdev=50.32 00:11:10.750 clat percentiles (usec): 00:11:10.750 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 159], 00:11:10.750 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:11:10.750 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:11:10.750 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 1106], 99.95th=[ 1139], 00:11:10.750 | 99.99th=[ 1614] 00:11:10.750 bw ( KiB/s): min= 8784, max= 8784, per=97.96%, avg=8784.00, stdev= 0.00, samples=1 00:11:10.750 iops : min= 2196, max= 2196, avg=2196.00, stdev= 0.00, samples=1 00:11:10.750 lat (usec) : 250=74.58%, 500=25.26%, 1000=0.05% 00:11:10.751 lat (msec) : 2=0.09%, 4=0.02% 00:11:10.751 cpu : usr=5.00%, sys=7.40%, ctx=4292, majf=0, minf=1 00:11:10.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.751 issued rwts: total=2048,2244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.751 00:11:10.751 Run status group 0 (all jobs): 00:11:10.751 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB 
(8389kB), run=1001-1001msec 00:11:10.751 WRITE: bw=8967KiB/s (9182kB/s), 8967KiB/s-8967KiB/s (9182kB/s-9182kB/s), io=8976KiB (9191kB), run=1001-1001msec 00:11:10.751 00:11:10.751 Disk stats (read/write): 00:11:10.751 nvme0n1: ios=1833/2048, merge=0/0, ticks=429/338, in_queue=767, util=91.58% 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.751 rmmod nvme_tcp 00:11:10.751 rmmod nvme_fabrics 00:11:10.751 rmmod nvme_keyring 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1043328 ']' 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1043328 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1043328 ']' 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1043328 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.751 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1043328 00:11:11.009 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.009 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.009 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1043328' 00:11:11.009 killing process with pid 1043328 00:11:11.009 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1043328 00:11:11.009 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1043328 00:11:11.269 19:09:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.269 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.177 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:13.177 00:11:13.177 real 0m10.239s 00:11:13.177 user 0m22.848s 00:11:13.177 sys 0m2.624s 00:11:13.177 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.177 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.177 ************************************ 00:11:13.177 END TEST nvmf_nmic 00:11:13.177 ************************************ 00:11:13.177 19:09:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:13.177 19:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.177 19:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.177 19:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:13.177 ************************************ 00:11:13.177 START TEST nvmf_fio_target 00:11:13.177 ************************************ 00:11:13.177 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:13.437 * Looking for test storage... 00:11:13.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:13.437 19:09:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.437 --rc genhtml_branch_coverage=1 00:11:13.437 --rc genhtml_function_coverage=1 00:11:13.437 --rc genhtml_legend=1 00:11:13.437 --rc geninfo_all_blocks=1 00:11:13.437 --rc geninfo_unexecuted_blocks=1 00:11:13.437 00:11:13.437 ' 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.437 --rc genhtml_branch_coverage=1 00:11:13.437 --rc genhtml_function_coverage=1 00:11:13.437 --rc genhtml_legend=1 00:11:13.437 --rc geninfo_all_blocks=1 00:11:13.437 --rc geninfo_unexecuted_blocks=1 00:11:13.437 00:11:13.437 ' 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.437 --rc genhtml_branch_coverage=1 00:11:13.437 --rc genhtml_function_coverage=1 00:11:13.437 --rc genhtml_legend=1 00:11:13.437 --rc geninfo_all_blocks=1 00:11:13.437 --rc geninfo_unexecuted_blocks=1 00:11:13.437 00:11:13.437 ' 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:13.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.437 --rc genhtml_branch_coverage=1 00:11:13.437 --rc genhtml_function_coverage=1 00:11:13.437 --rc genhtml_legend=1 00:11:13.437 --rc geninfo_all_blocks=1 00:11:13.437 --rc geninfo_unexecuted_blocks=1 00:11:13.437 00:11:13.437 ' 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.437 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.438 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.968 19:09:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:15.968 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:15.968 19:09:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:15.968 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:15.968 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.968 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:15.969 Found net devices under 0000:0a:00.1: cvl_0_1 
00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:15.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:11:15.969 00:11:15.969 --- 10.0.0.2 ping statistics --- 00:11:15.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.969 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:11:15.969 00:11:15.969 --- 10.0.0.1 ping statistics --- 00:11:15.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.969 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1046056 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1046056 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1046056 ']' 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.969 [2024-12-06 19:09:26.252684] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:11:15.969 [2024-12-06 19:09:26.252767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.969 [2024-12-06 19:09:26.319302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.969 [2024-12-06 19:09:26.373939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.969 [2024-12-06 19:09:26.373989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.969 [2024-12-06 19:09:26.374019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.969 [2024-12-06 19:09:26.374037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.969 [2024-12-06 19:09:26.374049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:15.969 [2024-12-06 19:09:26.375758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.969 [2024-12-06 19:09:26.375812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.969 [2024-12-06 19:09:26.375838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.969 [2024-12-06 19:09:26.375841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.969 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:16.534 [2024-12-06 19:09:26.824782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.534 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:16.792 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:16.792 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:17.050 19:09:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:17.050 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:17.308 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:17.308 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:17.566 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:17.566 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:17.823 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:18.081 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:18.081 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:18.339 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:18.339 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:18.905 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:18.905 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:18.905 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:19.163 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:19.163 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:19.421 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:19.421 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.987 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.987 [2024-12-06 19:09:30.510388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.987 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:20.244 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:20.503 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:21.437 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:21.437 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:21.437 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.437 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:21.437 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:21.437 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:23.333 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:23.333 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.333 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:23.333 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:23.333 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.333 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:23.333 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:23.333 [global] 00:11:23.333 thread=1 00:11:23.333 invalidate=1 00:11:23.333 rw=write 00:11:23.333 time_based=1 00:11:23.333 runtime=1 00:11:23.333 ioengine=libaio 00:11:23.333 direct=1 00:11:23.333 bs=4096 00:11:23.333 iodepth=1 00:11:23.333 norandommap=0 00:11:23.333 numjobs=1 00:11:23.333 00:11:23.333 
verify_dump=1 00:11:23.333 verify_backlog=512 00:11:23.333 verify_state_save=0 00:11:23.333 do_verify=1 00:11:23.333 verify=crc32c-intel 00:11:23.333 [job0] 00:11:23.333 filename=/dev/nvme0n1 00:11:23.333 [job1] 00:11:23.333 filename=/dev/nvme0n2 00:11:23.333 [job2] 00:11:23.333 filename=/dev/nvme0n3 00:11:23.333 [job3] 00:11:23.333 filename=/dev/nvme0n4 00:11:23.333 Could not set queue depth (nvme0n1) 00:11:23.333 Could not set queue depth (nvme0n2) 00:11:23.333 Could not set queue depth (nvme0n3) 00:11:23.333 Could not set queue depth (nvme0n4) 00:11:23.590 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.590 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.590 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.590 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.590 fio-3.35 00:11:23.590 Starting 4 threads 00:11:24.958 00:11:24.958 job0: (groupid=0, jobs=1): err= 0: pid=1047129: Fri Dec 6 19:09:35 2024 00:11:24.958 read: IOPS=22, BW=88.9KiB/s (91.0kB/s)(92.0KiB/1035msec) 00:11:24.958 slat (nsec): min=14064, max=34665, avg=23444.78, stdev=8940.98 00:11:24.958 clat (usec): min=227, max=41928, avg=39243.04, stdev=8507.61 00:11:24.958 lat (usec): min=241, max=41961, avg=39266.48, stdev=8509.53 00:11:24.958 clat percentiles (usec): 00:11:24.958 | 1.00th=[ 227], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:24.958 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:24.958 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:24.958 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:24.958 | 99.99th=[41681] 00:11:24.958 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:11:24.958 slat (nsec): min=8712, max=65677, 
avg=20529.67, stdev=8823.26 00:11:24.958 clat (usec): min=139, max=1361, avg=228.58, stdev=85.28 00:11:24.958 lat (usec): min=155, max=1379, avg=249.11, stdev=89.31 00:11:24.958 clat percentiles (usec): 00:11:24.958 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 178], 00:11:24.958 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 215], 00:11:24.958 | 70.00th=[ 225], 80.00th=[ 277], 90.00th=[ 343], 95.00th=[ 375], 00:11:24.958 | 99.00th=[ 420], 99.50th=[ 469], 99.90th=[ 1369], 99.95th=[ 1369], 00:11:24.958 | 99.99th=[ 1369] 00:11:24.958 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:24.958 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:24.958 lat (usec) : 250=73.64%, 500=21.87%, 750=0.19% 00:11:24.958 lat (msec) : 2=0.19%, 50=4.11% 00:11:24.958 cpu : usr=0.68%, sys=0.77%, ctx=537, majf=0, minf=1 00:11:24.958 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.958 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.958 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.958 job1: (groupid=0, jobs=1): err= 0: pid=1047131: Fri Dec 6 19:09:35 2024 00:11:24.958 read: IOPS=442, BW=1770KiB/s (1813kB/s)(1820KiB/1028msec) 00:11:24.958 slat (nsec): min=5981, max=23497, avg=7970.04, stdev=2733.32 00:11:24.958 clat (usec): min=180, max=42025, avg=2000.42, stdev=8265.22 00:11:24.958 lat (usec): min=193, max=42044, avg=2008.39, stdev=8266.57 00:11:24.958 clat percentiles (usec): 00:11:24.958 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:11:24.958 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 221], 00:11:24.958 | 70.00th=[ 229], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 914], 00:11:24.958 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[42206], 99.95th=[42206], 00:11:24.958 | 99.99th=[42206] 00:11:24.958 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:11:24.958 slat (nsec): min=6989, max=30323, avg=11365.45, stdev=3395.24 00:11:24.958 clat (usec): min=143, max=304, avg=202.66, stdev=24.11 00:11:24.958 lat (usec): min=151, max=318, avg=214.03, stdev=24.74 00:11:24.958 clat percentiles (usec): 00:11:24.958 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:11:24.958 | 30.00th=[ 188], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:11:24.958 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 241], 00:11:24.958 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 306], 00:11:24.958 | 99.99th=[ 306] 00:11:24.958 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:24.958 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:24.958 lat (usec) : 250=85.83%, 500=11.69%, 1000=0.31% 00:11:24.958 lat (msec) : 2=0.10%, 50=2.07% 00:11:24.958 cpu : usr=0.39%, sys=0.78%, ctx=968, majf=0, minf=1 00:11:24.958 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.958 issued rwts: total=455,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.958 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.958 job2: (groupid=0, jobs=1): err= 0: pid=1047137: Fri Dec 6 19:09:35 2024 00:11:24.958 read: IOPS=22, BW=88.5KiB/s (90.6kB/s)(92.0KiB/1040msec) 00:11:24.958 slat (nsec): min=14139, max=34270, avg=23278.43, stdev=8895.91 00:11:24.958 clat (usec): min=234, max=42018, avg=39599.61, stdev=8595.16 00:11:24.958 lat (usec): min=251, max=42036, avg=39622.89, stdev=8596.49 00:11:24.958 clat percentiles (usec): 00:11:24.958 | 1.00th=[ 235], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 
00:11:24.958 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:24.958 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:24.958 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:24.958 | 99.99th=[42206] 00:11:24.958 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:11:24.958 slat (nsec): min=9770, max=48004, avg=21431.69, stdev=5357.28 00:11:24.958 clat (usec): min=156, max=418, avg=220.42, stdev=44.92 00:11:24.958 lat (usec): min=174, max=442, avg=241.85, stdev=44.47 00:11:24.958 clat percentiles (usec): 00:11:24.959 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:11:24.959 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:11:24.959 | 70.00th=[ 217], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[ 310], 00:11:24.959 | 99.00th=[ 351], 99.50th=[ 396], 99.90th=[ 420], 99.95th=[ 420], 00:11:24.959 | 99.99th=[ 420] 00:11:24.959 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:24.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:24.959 lat (usec) : 250=74.21%, 500=21.68% 00:11:24.959 lat (msec) : 50=4.11% 00:11:24.959 cpu : usr=0.87%, sys=1.06%, ctx=537, majf=0, minf=1 00:11:24.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.959 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.959 job3: (groupid=0, jobs=1): err= 0: pid=1047138: Fri Dec 6 19:09:35 2024 00:11:24.959 read: IOPS=2138, BW=8555KiB/s (8761kB/s)(8564KiB/1001msec) 00:11:24.959 slat (nsec): min=4540, max=17394, avg=6414.24, stdev=1503.93 00:11:24.959 clat (usec): min=171, max=662, avg=237.65, stdev=43.00 00:11:24.959 
lat (usec): min=177, max=669, avg=244.07, stdev=43.59 00:11:24.959 clat percentiles (usec): 00:11:24.959 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 210], 00:11:24.959 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:11:24.959 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 281], 00:11:24.959 | 99.00th=[ 465], 99.50th=[ 537], 99.90th=[ 635], 99.95th=[ 652], 00:11:24.959 | 99.99th=[ 660] 00:11:24.959 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:24.959 slat (nsec): min=6108, max=33683, avg=8390.12, stdev=1968.26 00:11:24.959 clat (usec): min=126, max=356, avg=174.42, stdev=37.32 00:11:24.959 lat (usec): min=133, max=374, avg=182.81, stdev=38.22 00:11:24.959 clat percentiles (usec): 00:11:24.959 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:11:24.959 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:11:24.959 | 70.00th=[ 174], 80.00th=[ 192], 90.00th=[ 219], 95.00th=[ 253], 00:11:24.959 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 351], 99.95th=[ 355], 00:11:24.959 | 99.99th=[ 359] 00:11:24.959 bw ( KiB/s): min= 9736, max= 9736, per=61.80%, avg=9736.00, stdev= 0.00, samples=1 00:11:24.959 iops : min= 2434, max= 2434, avg=2434.00, stdev= 0.00, samples=1 00:11:24.959 lat (usec) : 250=85.45%, 500=14.21%, 750=0.34% 00:11:24.959 cpu : usr=1.60%, sys=3.60%, ctx=4703, majf=0, minf=1 00:11:24.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.959 issued rwts: total=2141,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.959 00:11:24.959 Run status group 0 (all jobs): 00:11:24.959 READ: bw=9.92MiB/s (10.4MB/s), 88.5KiB/s-8555KiB/s (90.6kB/s-8761kB/s), io=10.3MiB (10.8MB), 
run=1001-1040msec 00:11:24.959 WRITE: bw=15.4MiB/s (16.1MB/s), 1969KiB/s-9.99MiB/s (2016kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1040msec 00:11:24.959 00:11:24.959 Disk stats (read/write): 00:11:24.959 nvme0n1: ios=67/512, merge=0/0, ticks=1134/111, in_queue=1245, util=85.07% 00:11:24.959 nvme0n2: ios=499/512, merge=0/0, ticks=1463/102, in_queue=1565, util=89.01% 00:11:24.959 nvme0n3: ios=75/512, merge=0/0, ticks=792/106, in_queue=898, util=94.97% 00:11:24.959 nvme0n4: ios=1966/2048, merge=0/0, ticks=531/360, in_queue=891, util=95.77% 00:11:24.959 19:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:24.959 [global] 00:11:24.959 thread=1 00:11:24.959 invalidate=1 00:11:24.959 rw=randwrite 00:11:24.959 time_based=1 00:11:24.959 runtime=1 00:11:24.959 ioengine=libaio 00:11:24.959 direct=1 00:11:24.959 bs=4096 00:11:24.959 iodepth=1 00:11:24.959 norandommap=0 00:11:24.959 numjobs=1 00:11:24.959 00:11:24.959 verify_dump=1 00:11:24.959 verify_backlog=512 00:11:24.959 verify_state_save=0 00:11:24.959 do_verify=1 00:11:24.959 verify=crc32c-intel 00:11:24.959 [job0] 00:11:24.959 filename=/dev/nvme0n1 00:11:24.959 [job1] 00:11:24.959 filename=/dev/nvme0n2 00:11:24.959 [job2] 00:11:24.959 filename=/dev/nvme0n3 00:11:24.959 [job3] 00:11:24.959 filename=/dev/nvme0n4 00:11:24.959 Could not set queue depth (nvme0n1) 00:11:24.959 Could not set queue depth (nvme0n2) 00:11:24.959 Could not set queue depth (nvme0n3) 00:11:24.959 Could not set queue depth (nvme0n4) 00:11:24.959 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.959 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.959 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.959 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.959 fio-3.35 00:11:24.959 Starting 4 threads 00:11:26.335 00:11:26.335 job0: (groupid=0, jobs=1): err= 0: pid=1047364: Fri Dec 6 19:09:36 2024 00:11:26.335 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:11:26.335 slat (nsec): min=8223, max=34962, avg=27277.38, stdev=10011.59 00:11:26.335 clat (usec): min=40926, max=42043, avg=41157.26, stdev=402.60 00:11:26.335 lat (usec): min=40959, max=42051, avg=41184.53, stdev=402.52 00:11:26.335 clat percentiles (usec): 00:11:26.335 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:26.335 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:26.335 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:26.335 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:26.335 | 99.99th=[42206] 00:11:26.335 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:11:26.335 slat (nsec): min=7530, max=40583, avg=16349.07, stdev=6495.68 00:11:26.336 clat (usec): min=159, max=832, avg=253.37, stdev=55.55 00:11:26.336 lat (usec): min=178, max=842, avg=269.72, stdev=53.10 00:11:26.336 clat percentiles (usec): 00:11:26.336 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:11:26.336 | 30.00th=[ 215], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 273], 00:11:26.336 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 343], 00:11:26.336 | 99.00th=[ 392], 99.50th=[ 424], 99.90th=[ 832], 99.95th=[ 832], 00:11:26.336 | 99.99th=[ 832] 00:11:26.336 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:11:26.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:26.336 lat (usec) : 250=46.34%, 500=49.53%, 1000=0.19% 00:11:26.336 lat (msec) : 50=3.94% 00:11:26.336 cpu : usr=0.20%, sys=1.09%, ctx=535, majf=0, minf=1 00:11:26.336 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.336 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.336 job1: (groupid=0, jobs=1): err= 0: pid=1047365: Fri Dec 6 19:09:36 2024 00:11:26.336 read: IOPS=26, BW=107KiB/s (110kB/s)(112KiB/1042msec) 00:11:26.336 slat (nsec): min=8306, max=60558, avg=25007.57, stdev=13799.98 00:11:26.336 clat (usec): min=334, max=41989, avg=32824.23, stdev=17184.71 00:11:26.336 lat (usec): min=344, max=42025, avg=32849.23, stdev=17192.60 00:11:26.336 clat percentiles (usec): 00:11:26.336 | 1.00th=[ 334], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 1037], 00:11:26.336 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:11:26.336 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:26.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:26.336 | 99.99th=[42206] 00:11:26.336 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:11:26.336 slat (nsec): min=9010, max=47534, avg=12438.78, stdev=4213.13 00:11:26.336 clat (usec): min=165, max=354, avg=220.74, stdev=22.96 00:11:26.336 lat (usec): min=176, max=373, avg=233.18, stdev=23.99 00:11:26.336 clat percentiles (usec): 00:11:26.336 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 202], 00:11:26.336 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:11:26.336 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 258], 00:11:26.336 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 355], 99.95th=[ 355], 00:11:26.336 | 99.99th=[ 355] 00:11:26.336 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:11:26.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:11:26.336 lat (usec) : 250=86.67%, 500=9.07% 00:11:26.336 lat (msec) : 2=0.19%, 50=4.07% 00:11:26.336 cpu : usr=0.29%, sys=1.06%, ctx=542, majf=0, minf=1 00:11:26.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.336 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.336 job2: (groupid=0, jobs=1): err= 0: pid=1047366: Fri Dec 6 19:09:36 2024 00:11:26.336 read: IOPS=22, BW=90.3KiB/s (92.5kB/s)(92.0KiB/1019msec) 00:11:26.336 slat (nsec): min=8234, max=34140, avg=25471.65, stdev=10443.49 00:11:26.336 clat (usec): min=267, max=42073, avg=39685.71, stdev=8607.68 00:11:26.336 lat (usec): min=275, max=42107, avg=39711.18, stdev=8611.40 00:11:26.336 clat percentiles (usec): 00:11:26.336 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:26.336 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:11:26.336 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:26.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:26.336 | 99.99th=[42206] 00:11:26.336 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:11:26.336 slat (nsec): min=8543, max=54865, avg=11757.11, stdev=4403.23 00:11:26.336 clat (usec): min=154, max=273, avg=189.70, stdev=16.69 00:11:26.336 lat (usec): min=165, max=286, avg=201.46, stdev=17.95 00:11:26.336 clat percentiles (usec): 00:11:26.336 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:11:26.336 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:11:26.336 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 221], 00:11:26.336 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 273], 99.95th=[ 273], 00:11:26.336 | 
99.99th=[ 273] 00:11:26.336 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:11:26.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:26.336 lat (usec) : 250=95.51%, 500=0.37% 00:11:26.336 lat (msec) : 50=4.11% 00:11:26.336 cpu : usr=0.59%, sys=0.69%, ctx=536, majf=0, minf=1 00:11:26.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.336 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.336 job3: (groupid=0, jobs=1): err= 0: pid=1047367: Fri Dec 6 19:09:36 2024 00:11:26.336 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:11:26.336 slat (nsec): min=13377, max=36065, avg=28004.14, stdev=9947.42 00:11:26.336 clat (usec): min=40548, max=41990, avg=41226.93, stdev=483.97 00:11:26.336 lat (usec): min=40565, max=42026, avg=41254.94, stdev=486.86 00:11:26.336 clat percentiles (usec): 00:11:26.336 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:26.336 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:26.336 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:26.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:26.336 | 99.99th=[42206] 00:11:26.336 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:11:26.336 slat (nsec): min=8228, max=64033, avg=14808.62, stdev=6835.98 00:11:26.336 clat (usec): min=160, max=728, avg=252.59, stdev=51.35 00:11:26.336 lat (usec): min=170, max=738, avg=267.40, stdev=52.54 00:11:26.336 clat percentiles (usec): 00:11:26.336 | 1.00th=[ 172], 5.00th=[ 188], 10.00th=[ 202], 20.00th=[ 219], 00:11:26.336 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 243], 
60.00th=[ 251], 00:11:26.336 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 347], 00:11:26.336 | 99.00th=[ 408], 99.50th=[ 453], 99.90th=[ 725], 99.95th=[ 725], 00:11:26.336 | 99.99th=[ 725] 00:11:26.336 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:11:26.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:26.336 lat (usec) : 250=57.04%, 500=38.84%, 750=0.19% 00:11:26.336 lat (msec) : 50=3.94% 00:11:26.336 cpu : usr=0.20%, sys=0.90%, ctx=536, majf=0, minf=1 00:11:26.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.336 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.336 00:11:26.336 Run status group 0 (all jobs): 00:11:26.336 READ: bw=357KiB/s (366kB/s), 83.5KiB/s-107KiB/s (85.5kB/s-110kB/s), io=372KiB (381kB), run=1006-1042msec 00:11:26.336 WRITE: bw=7862KiB/s (8050kB/s), 1965KiB/s-2036KiB/s (2013kB/s-2085kB/s), io=8192KiB (8389kB), run=1006-1042msec 00:11:26.336 00:11:26.336 Disk stats (read/write): 00:11:26.336 nvme0n1: ios=70/512, merge=0/0, ticks=1341/128, in_queue=1469, util=90.38% 00:11:26.336 nvme0n2: ios=69/512, merge=0/0, ticks=997/107, in_queue=1104, util=97.06% 00:11:26.336 nvme0n3: ios=76/512, merge=0/0, ticks=824/87, in_queue=911, util=91.26% 00:11:26.336 nvme0n4: ios=76/512, merge=0/0, ticks=1476/122, in_queue=1598, util=98.64% 00:11:26.336 19:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:26.336 [global] 00:11:26.336 thread=1 00:11:26.336 invalidate=1 00:11:26.336 rw=write 00:11:26.336 time_based=1 00:11:26.336 runtime=1 00:11:26.336 
ioengine=libaio 00:11:26.336 direct=1 00:11:26.336 bs=4096 00:11:26.336 iodepth=128 00:11:26.336 norandommap=0 00:11:26.336 numjobs=1 00:11:26.336 00:11:26.336 verify_dump=1 00:11:26.336 verify_backlog=512 00:11:26.336 verify_state_save=0 00:11:26.336 do_verify=1 00:11:26.336 verify=crc32c-intel 00:11:26.336 [job0] 00:11:26.336 filename=/dev/nvme0n1 00:11:26.336 [job1] 00:11:26.336 filename=/dev/nvme0n2 00:11:26.336 [job2] 00:11:26.336 filename=/dev/nvme0n3 00:11:26.336 [job3] 00:11:26.336 filename=/dev/nvme0n4 00:11:26.336 Could not set queue depth (nvme0n1) 00:11:26.336 Could not set queue depth (nvme0n2) 00:11:26.336 Could not set queue depth (nvme0n3) 00:11:26.336 Could not set queue depth (nvme0n4) 00:11:26.595 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.595 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.595 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.595 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.595 fio-3.35 00:11:26.595 Starting 4 threads 00:11:27.972 00:11:27.972 job0: (groupid=0, jobs=1): err= 0: pid=1047709: Fri Dec 6 19:09:38 2024 00:11:27.972 read: IOPS=5627, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:11:27.972 slat (usec): min=2, max=11568, avg=93.86, stdev=663.76 00:11:27.972 clat (usec): min=1961, max=29590, avg=11793.87, stdev=4197.89 00:11:27.972 lat (usec): min=1965, max=29595, avg=11887.73, stdev=4224.45 00:11:27.972 clat percentiles (usec): 00:11:27.972 | 1.00th=[ 3490], 5.00th=[ 5669], 10.00th=[ 8291], 20.00th=[ 9634], 00:11:27.972 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:11:27.972 | 70.00th=[12256], 80.00th=[13698], 90.00th=[18220], 95.00th=[19792], 00:11:27.972 | 99.00th=[25560], 99.50th=[29492], 99.90th=[29492], 99.95th=[29492], 
00:11:27.972 | 99.99th=[29492] 00:11:27.972 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:11:27.972 slat (usec): min=3, max=9229, avg=59.36, stdev=341.75 00:11:27.972 clat (usec): min=190, max=23136, avg=9859.07, stdev=3107.69 00:11:27.972 lat (usec): min=334, max=23144, avg=9918.43, stdev=3132.34 00:11:27.972 clat percentiles (usec): 00:11:27.972 | 1.00th=[ 1909], 5.00th=[ 4015], 10.00th=[ 5342], 20.00th=[ 8029], 00:11:27.972 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[10945], 00:11:27.972 | 70.00th=[11076], 80.00th=[11338], 90.00th=[12256], 95.00th=[14877], 00:11:27.972 | 99.00th=[19530], 99.50th=[20841], 99.90th=[23200], 99.95th=[23200], 00:11:27.972 | 99.99th=[23200] 00:11:27.972 bw ( KiB/s): min=21616, max=26616, per=36.68%, avg=24116.00, stdev=3535.53, samples=2 00:11:27.972 iops : min= 5404, max= 6654, avg=6029.00, stdev=883.88, samples=2 00:11:27.972 lat (usec) : 250=0.01%, 500=0.04%, 750=0.07%, 1000=0.04% 00:11:27.972 lat (msec) : 2=0.48%, 4=2.54%, 10=30.66%, 20=63.44%, 50=2.73% 00:11:27.972 cpu : usr=5.29%, sys=6.69%, ctx=596, majf=0, minf=1 00:11:27.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:27.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.972 issued rwts: total=5644,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.972 job1: (groupid=0, jobs=1): err= 0: pid=1047710: Fri Dec 6 19:09:38 2024 00:11:27.972 read: IOPS=5196, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1004msec) 00:11:27.972 slat (usec): min=2, max=6831, avg=92.22, stdev=523.86 00:11:27.972 clat (usec): min=2620, max=22485, avg=11216.33, stdev=2090.01 00:11:27.972 lat (usec): min=5614, max=22490, avg=11308.55, stdev=2133.37 00:11:27.972 clat percentiles (usec): 00:11:27.972 | 1.00th=[ 7046], 5.00th=[ 7963], 10.00th=[ 8717], 
20.00th=[ 9765], 00:11:27.972 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:11:27.972 | 70.00th=[11731], 80.00th=[12125], 90.00th=[13435], 95.00th=[14877], 00:11:27.972 | 99.00th=[18482], 99.50th=[21365], 99.90th=[22152], 99.95th=[22414], 00:11:27.972 | 99.99th=[22414] 00:11:27.972 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:11:27.972 slat (usec): min=3, max=7842, avg=82.77, stdev=338.18 00:11:27.972 clat (usec): min=1073, max=34695, avg=12189.15, stdev=4465.27 00:11:27.972 lat (usec): min=1082, max=34860, avg=12271.92, stdev=4495.83 00:11:27.972 clat percentiles (usec): 00:11:27.972 | 1.00th=[ 4948], 5.00th=[ 7373], 10.00th=[ 8979], 20.00th=[10159], 00:11:27.972 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:11:27.972 | 70.00th=[11994], 80.00th=[12780], 90.00th=[16450], 95.00th=[22152], 00:11:27.972 | 99.00th=[31851], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:11:27.972 | 99.99th=[34866] 00:11:27.972 bw ( KiB/s): min=20480, max=24328, per=34.07%, avg=22404.00, stdev=2720.95, samples=2 00:11:27.972 iops : min= 5120, max= 6082, avg=5601.00, stdev=680.24, samples=2 00:11:27.972 lat (msec) : 2=0.19%, 4=0.26%, 10=19.95%, 20=75.37%, 50=4.23% 00:11:27.972 cpu : usr=5.78%, sys=10.47%, ctx=700, majf=0, minf=1 00:11:27.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:27.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.972 issued rwts: total=5217,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.972 job2: (groupid=0, jobs=1): err= 0: pid=1047713: Fri Dec 6 19:09:38 2024 00:11:27.972 read: IOPS=2013, BW=8055KiB/s (8248kB/s)(8192KiB/1017msec) 00:11:27.972 slat (usec): min=3, max=13665, avg=156.45, stdev=963.89 00:11:27.972 clat (usec): min=6617, max=52177, 
avg=16974.08, stdev=6440.32 00:11:27.972 lat (usec): min=6637, max=52187, avg=17130.53, stdev=6525.23 00:11:27.972 clat percentiles (usec): 00:11:27.972 | 1.00th=[ 7308], 5.00th=[10945], 10.00th=[12387], 20.00th=[13960], 00:11:27.972 | 30.00th=[14091], 40.00th=[14091], 50.00th=[14484], 60.00th=[14746], 00:11:27.972 | 70.00th=[15401], 80.00th=[20579], 90.00th=[27395], 95.00th=[31851], 00:11:27.972 | 99.00th=[37487], 99.50th=[41681], 99.90th=[52167], 99.95th=[52167], 00:11:27.972 | 99.99th=[52167] 00:11:27.972 write: IOPS=2263, BW=9054KiB/s (9271kB/s)(9208KiB/1017msec); 0 zone resets 00:11:27.972 slat (usec): min=4, max=15514, avg=286.86, stdev=1293.81 00:11:27.972 clat (usec): min=1367, max=124229, avg=40934.54, stdev=25779.94 00:11:27.972 lat (usec): min=1389, max=124238, avg=41221.40, stdev=25880.28 00:11:27.972 clat percentiles (msec): 00:11:27.972 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 26], 00:11:27.972 | 30.00th=[ 27], 40.00th=[ 28], 50.00th=[ 29], 60.00th=[ 40], 00:11:27.972 | 70.00th=[ 50], 80.00th=[ 68], 90.00th=[ 78], 95.00th=[ 92], 00:11:27.972 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 125], 99.95th=[ 125], 00:11:27.972 | 99.99th=[ 125] 00:11:27.973 bw ( KiB/s): min= 8512, max= 8880, per=13.23%, avg=8696.00, stdev=260.22, samples=2 00:11:27.973 iops : min= 2128, max= 2220, avg=2174.00, stdev=65.05, samples=2 00:11:27.973 lat (msec) : 2=0.44%, 4=0.46%, 10=4.07%, 20=38.94%, 50=40.41% 00:11:27.973 lat (msec) : 100=14.23%, 250=1.45% 00:11:27.973 cpu : usr=2.56%, sys=4.82%, ctx=301, majf=0, minf=2 00:11:27.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:27.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.973 issued rwts: total=2048,2302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.973 job3: (groupid=0, jobs=1): err= 0: 
pid=1047714: Fri Dec 6 19:09:38 2024 00:11:27.973 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:11:27.973 slat (usec): min=3, max=13549, avg=198.08, stdev=1167.01 00:11:27.973 clat (usec): min=6843, max=73620, avg=18732.74, stdev=11656.07 00:11:27.973 lat (usec): min=6850, max=73678, avg=18930.82, stdev=11841.52 00:11:27.973 clat percentiles (usec): 00:11:27.973 | 1.00th=[ 8586], 5.00th=[12387], 10.00th=[12911], 20.00th=[13435], 00:11:27.973 | 30.00th=[13698], 40.00th=[14353], 50.00th=[14746], 60.00th=[15795], 00:11:27.973 | 70.00th=[16188], 80.00th=[20055], 90.00th=[25822], 95.00th=[47449], 00:11:27.973 | 99.00th=[71828], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:11:27.973 | 99.99th=[73925] 00:11:27.973 write: IOPS=2603, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1014msec); 0 zone resets 00:11:27.973 slat (usec): min=4, max=13716, avg=177.11, stdev=855.36 00:11:27.973 clat (usec): min=4156, max=73577, avg=30478.10, stdev=18337.67 00:11:27.973 lat (usec): min=4164, max=73585, avg=30655.21, stdev=18405.54 00:11:27.973 clat percentiles (usec): 00:11:27.973 | 1.00th=[ 5735], 5.00th=[11600], 10.00th=[13042], 20.00th=[14877], 00:11:27.973 | 30.00th=[16581], 40.00th=[24249], 50.00th=[26608], 60.00th=[27132], 00:11:27.973 | 70.00th=[30278], 80.00th=[43254], 90.00th=[68682], 95.00th=[70779], 00:11:27.973 | 99.00th=[71828], 99.50th=[71828], 99.90th=[72877], 99.95th=[73925], 00:11:27.973 | 99.99th=[73925] 00:11:27.973 bw ( KiB/s): min= 8192, max=12312, per=15.59%, avg=10252.00, stdev=2913.28, samples=2 00:11:27.973 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:11:27.973 lat (msec) : 10=2.75%, 20=53.06%, 50=33.71%, 100=10.48% 00:11:27.973 cpu : usr=2.86%, sys=6.02%, ctx=305, majf=0, minf=1 00:11:27.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:27.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:11:27.973 issued rwts: total=2560,2640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.973 00:11:27.973 Run status group 0 (all jobs): 00:11:27.973 READ: bw=59.4MiB/s (62.3MB/s), 8055KiB/s-22.0MiB/s (8248kB/s-23.0MB/s), io=60.4MiB (63.4MB), run=1003-1017msec 00:11:27.973 WRITE: bw=64.2MiB/s (67.3MB/s), 9054KiB/s-23.9MiB/s (9271kB/s-25.1MB/s), io=65.3MiB (68.5MB), run=1003-1017msec 00:11:27.973 00:11:27.973 Disk stats (read/write): 00:11:27.973 nvme0n1: ios=4651/5042, merge=0/0, ticks=41848/46542, in_queue=88390, util=97.90% 00:11:27.973 nvme0n2: ios=4525/4608, merge=0/0, ticks=25026/27831, in_queue=52857, util=97.76% 00:11:27.973 nvme0n3: ios=1587/1959, merge=0/0, ticks=24904/78450, in_queue=103354, util=99.27% 00:11:27.973 nvme0n4: ios=2099/2343, merge=0/0, ticks=36614/66615, in_queue=103229, util=98.00% 00:11:27.973 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:27.973 [global] 00:11:27.973 thread=1 00:11:27.973 invalidate=1 00:11:27.973 rw=randwrite 00:11:27.973 time_based=1 00:11:27.973 runtime=1 00:11:27.973 ioengine=libaio 00:11:27.973 direct=1 00:11:27.973 bs=4096 00:11:27.973 iodepth=128 00:11:27.973 norandommap=0 00:11:27.973 numjobs=1 00:11:27.973 00:11:27.973 verify_dump=1 00:11:27.973 verify_backlog=512 00:11:27.973 verify_state_save=0 00:11:27.973 do_verify=1 00:11:27.973 verify=crc32c-intel 00:11:27.973 [job0] 00:11:27.973 filename=/dev/nvme0n1 00:11:27.973 [job1] 00:11:27.973 filename=/dev/nvme0n2 00:11:27.973 [job2] 00:11:27.973 filename=/dev/nvme0n3 00:11:27.973 [job3] 00:11:27.973 filename=/dev/nvme0n4 00:11:27.973 Could not set queue depth (nvme0n1) 00:11:27.973 Could not set queue depth (nvme0n2) 00:11:27.973 Could not set queue depth (nvme0n3) 00:11:27.973 Could not set queue depth (nvme0n4) 00:11:27.973 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:27.973 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:27.973 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:27.973 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:27.973 fio-3.35 00:11:27.973 Starting 4 threads 00:11:29.350 00:11:29.350 job0: (groupid=0, jobs=1): err= 0: pid=1047953: Fri Dec 6 19:09:39 2024 00:11:29.350 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:11:29.350 slat (usec): min=2, max=17932, avg=103.00, stdev=637.61 00:11:29.350 clat (usec): min=4455, max=34646, avg=13437.08, stdev=4663.65 00:11:29.350 lat (usec): min=4889, max=34657, avg=13540.07, stdev=4682.23 00:11:29.350 clat percentiles (usec): 00:11:29.350 | 1.00th=[ 5473], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10814], 00:11:29.350 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12518], 60.00th=[12780], 00:11:29.350 | 70.00th=[13173], 80.00th=[14222], 90.00th=[17433], 95.00th=[24511], 00:11:29.350 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:11:29.350 | 99.99th=[34866] 00:11:29.350 write: IOPS=4830, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1005msec); 0 zone resets 00:11:29.350 slat (usec): min=3, max=23538, avg=99.90, stdev=712.83 00:11:29.350 clat (usec): min=3096, max=56526, avg=13293.04, stdev=6179.20 00:11:29.350 lat (usec): min=3107, max=56545, avg=13392.94, stdev=6235.20 00:11:29.350 clat percentiles (usec): 00:11:29.350 | 1.00th=[ 5735], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10421], 00:11:29.350 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12387], 00:11:29.350 | 70.00th=[12911], 80.00th=[13304], 90.00th=[15401], 95.00th=[31065], 00:11:29.350 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[49546], 00:11:29.350 | 
99.99th=[56361] 00:11:29.350 bw ( KiB/s): min=18882, max=18904, per=30.64%, avg=18893.00, stdev=15.56, samples=2 00:11:29.350 iops : min= 4720, max= 4726, avg=4723.00, stdev= 4.24, samples=2 00:11:29.350 lat (msec) : 4=0.06%, 10=9.75%, 20=82.26%, 50=7.90%, 100=0.02% 00:11:29.350 cpu : usr=4.38%, sys=7.07%, ctx=435, majf=0, minf=1 00:11:29.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:29.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.350 issued rwts: total=4608,4855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.350 job1: (groupid=0, jobs=1): err= 0: pid=1047954: Fri Dec 6 19:09:39 2024 00:11:29.350 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:11:29.350 slat (usec): min=2, max=15177, avg=172.41, stdev=1121.67 00:11:29.350 clat (usec): min=6337, max=56483, avg=20893.03, stdev=9915.04 00:11:29.350 lat (usec): min=6343, max=56517, avg=21065.45, stdev=10004.24 00:11:29.350 clat percentiles (usec): 00:11:29.350 | 1.00th=[ 6390], 5.00th=[11731], 10.00th=[12518], 20.00th=[12649], 00:11:29.350 | 30.00th=[15401], 40.00th=[17171], 50.00th=[17695], 60.00th=[19530], 00:11:29.350 | 70.00th=[22676], 80.00th=[26608], 90.00th=[38536], 95.00th=[44303], 00:11:29.350 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:11:29.350 | 99.99th=[56361] 00:11:29.350 write: IOPS=2894, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1011msec); 0 zone resets 00:11:29.350 slat (usec): min=4, max=24548, avg=178.16, stdev=908.53 00:11:29.350 clat (usec): min=1828, max=68232, avg=25545.70, stdev=14157.07 00:11:29.350 lat (usec): min=1847, max=68241, avg=25723.86, stdev=14238.23 00:11:29.350 clat percentiles (usec): 00:11:29.350 | 1.00th=[ 2638], 5.00th=[ 7898], 10.00th=[12649], 20.00th=[14091], 00:11:29.350 | 30.00th=[14615], 40.00th=[21627], 
50.00th=[23725], 60.00th=[25822], 00:11:29.350 | 70.00th=[26608], 80.00th=[38011], 90.00th=[49021], 95.00th=[54264], 00:11:29.350 | 99.00th=[63177], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:11:29.350 | 99.99th=[68682] 00:11:29.350 bw ( KiB/s): min=10640, max=11744, per=18.15%, avg=11192.00, stdev=780.65, samples=2 00:11:29.350 iops : min= 2660, max= 2936, avg=2798.00, stdev=195.16, samples=2 00:11:29.351 lat (msec) : 2=0.35%, 4=0.95%, 10=3.79%, 20=44.42%, 50=44.80% 00:11:29.351 lat (msec) : 100=5.69% 00:11:29.351 cpu : usr=2.48%, sys=4.16%, ctx=299, majf=0, minf=1 00:11:29.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:29.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.351 issued rwts: total=2560,2926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.351 job2: (groupid=0, jobs=1): err= 0: pid=1047955: Fri Dec 6 19:09:39 2024 00:11:29.351 read: IOPS=2261, BW=9045KiB/s (9262kB/s)(9108KiB/1007msec) 00:11:29.351 slat (usec): min=3, max=18316, avg=201.41, stdev=1133.47 00:11:29.351 clat (usec): min=2663, max=97920, avg=20708.02, stdev=8902.19 00:11:29.351 lat (usec): min=8007, max=97930, avg=20909.43, stdev=9077.09 00:11:29.351 clat percentiles (usec): 00:11:29.351 | 1.00th=[ 8455], 5.00th=[14091], 10.00th=[15270], 20.00th=[15795], 00:11:29.351 | 30.00th=[16712], 40.00th=[17171], 50.00th=[18220], 60.00th=[20317], 00:11:29.351 | 70.00th=[21103], 80.00th=[24511], 90.00th=[26346], 95.00th=[32113], 00:11:29.351 | 99.00th=[60556], 99.50th=[83362], 99.90th=[98042], 99.95th=[98042], 00:11:29.351 | 99.99th=[98042] 00:11:29.351 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:11:29.351 slat (usec): min=3, max=26616, avg=193.36, stdev=913.70 00:11:29.351 clat (usec): min=885, max=97941, avg=31489.65, stdev=16448.90 
00:11:29.351 lat (usec): min=892, max=98986, avg=31683.01, stdev=16494.35 00:11:29.351 clat percentiles (usec): 00:11:29.351 | 1.00th=[ 2573], 5.00th=[10683], 10.00th=[15139], 20.00th=[23200], 00:11:29.351 | 30.00th=[23987], 40.00th=[25297], 50.00th=[26346], 60.00th=[28443], 00:11:29.351 | 70.00th=[32113], 80.00th=[43254], 90.00th=[52167], 95.00th=[63701], 00:11:29.351 | 99.00th=[90702], 99.50th=[92799], 99.90th=[93848], 99.95th=[98042], 00:11:29.351 | 99.99th=[98042] 00:11:29.351 bw ( KiB/s): min= 8208, max=12247, per=16.59%, avg=10227.50, stdev=2856.00, samples=2 00:11:29.351 iops : min= 2052, max= 3061, avg=2556.50, stdev=713.47, samples=2 00:11:29.351 lat (usec) : 1000=0.06% 00:11:29.351 lat (msec) : 2=0.33%, 4=0.35%, 10=2.69%, 20=30.66%, 50=58.94% 00:11:29.351 lat (msec) : 100=6.97% 00:11:29.351 cpu : usr=4.27%, sys=6.16%, ctx=347, majf=0, minf=1 00:11:29.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:29.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.351 issued rwts: total=2277,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.351 job3: (groupid=0, jobs=1): err= 0: pid=1047956: Fri Dec 6 19:09:39 2024 00:11:29.351 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:11:29.351 slat (usec): min=2, max=9093, avg=91.47, stdev=561.70 00:11:29.351 clat (usec): min=2798, max=21319, avg=12195.50, stdev=1868.98 00:11:29.351 lat (usec): min=2803, max=25109, avg=12286.98, stdev=1909.47 00:11:29.351 clat percentiles (usec): 00:11:29.351 | 1.00th=[ 7111], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11338], 00:11:29.351 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:11:29.351 | 70.00th=[12780], 80.00th=[13173], 90.00th=[14353], 95.00th=[14877], 00:11:29.351 | 99.00th=[17695], 99.50th=[19006], 99.90th=[19792], 
99.95th=[20841], 00:11:29.351 | 99.99th=[21365] 00:11:29.351 write: IOPS=5221, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec); 0 zone resets 00:11:29.351 slat (usec): min=4, max=9647, avg=87.77, stdev=559.95 00:11:29.351 clat (usec): min=466, max=36645, avg=12262.39, stdev=4551.37 00:11:29.351 lat (usec): min=540, max=36659, avg=12350.16, stdev=4579.21 00:11:29.351 clat percentiles (usec): 00:11:29.351 | 1.00th=[ 5276], 5.00th=[ 6718], 10.00th=[ 7504], 20.00th=[10159], 00:11:29.351 | 30.00th=[10683], 40.00th=[11469], 50.00th=[11731], 60.00th=[12256], 00:11:29.351 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14877], 95.00th=[19530], 00:11:29.351 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:11:29.351 | 99.99th=[36439] 00:11:29.351 bw ( KiB/s): min=20439, max=20504, per=33.20%, avg=20471.50, stdev=45.96, samples=2 00:11:29.351 iops : min= 5109, max= 5126, avg=5117.50, stdev=12.02, samples=2 00:11:29.351 lat (usec) : 500=0.01%, 750=0.09%, 1000=0.13% 00:11:29.351 lat (msec) : 4=0.15%, 10=12.86%, 20=84.61%, 50=2.15% 00:11:29.351 cpu : usr=5.68%, sys=8.97%, ctx=381, majf=0, minf=1 00:11:29.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:29.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.351 issued rwts: total=5120,5242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.351 00:11:29.351 Run status group 0 (all jobs): 00:11:29.351 READ: bw=56.3MiB/s (59.0MB/s), 9045KiB/s-19.9MiB/s (9262kB/s-20.9MB/s), io=56.9MiB (59.7MB), run=1004-1011msec 00:11:29.351 WRITE: bw=60.2MiB/s (63.1MB/s), 9.93MiB/s-20.4MiB/s (10.4MB/s-21.4MB/s), io=60.9MiB (63.8MB), run=1004-1011msec 00:11:29.351 00:11:29.351 Disk stats (read/write): 00:11:29.351 nvme0n1: ios=3797/4096, merge=0/0, ticks=21399/23167, in_queue=44566, util=98.50% 00:11:29.351 nvme0n2: 
ios=2091/2496, merge=0/0, ticks=26731/35968, in_queue=62699, util=96.75% 00:11:29.351 nvme0n3: ios=2048/2287, merge=0/0, ticks=22078/32607, in_queue=54685, util=88.83% 00:11:29.351 nvme0n4: ios=4241/4608, merge=0/0, ticks=26037/26003, in_queue=52040, util=95.47% 00:11:29.351 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:29.351 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1048089 00:11:29.351 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:29.351 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:29.351 [global] 00:11:29.351 thread=1 00:11:29.351 invalidate=1 00:11:29.351 rw=read 00:11:29.351 time_based=1 00:11:29.351 runtime=10 00:11:29.351 ioengine=libaio 00:11:29.351 direct=1 00:11:29.351 bs=4096 00:11:29.351 iodepth=1 00:11:29.351 norandommap=1 00:11:29.351 numjobs=1 00:11:29.351 00:11:29.351 [job0] 00:11:29.351 filename=/dev/nvme0n1 00:11:29.351 [job1] 00:11:29.351 filename=/dev/nvme0n2 00:11:29.351 [job2] 00:11:29.351 filename=/dev/nvme0n3 00:11:29.351 [job3] 00:11:29.351 filename=/dev/nvme0n4 00:11:29.351 Could not set queue depth (nvme0n1) 00:11:29.351 Could not set queue depth (nvme0n2) 00:11:29.351 Could not set queue depth (nvme0n3) 00:11:29.351 Could not set queue depth (nvme0n4) 00:11:29.351 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.351 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.351 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.351 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.351 fio-3.35 00:11:29.351 Starting 4 threads 00:11:32.632 
19:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:32.632 19:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:32.632 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2875392, buflen=4096 00:11:32.632 fio: pid=1048185, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:32.890 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:32.890 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:32.890 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=733184, buflen=4096 00:11:32.890 fio: pid=1048184, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:33.148 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:33.148 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:33.148 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=33595392, buflen=4096 00:11:33.148 fio: pid=1048181, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:33.406 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7417856, buflen=4096 00:11:33.406 fio: pid=1048182, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:33.407 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:11:33.407 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:33.407 00:11:33.407 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1048181: Fri Dec 6 19:09:43 2024 00:11:33.407 read: IOPS=2319, BW=9276KiB/s (9498kB/s)(32.0MiB/3537msec) 00:11:33.407 slat (usec): min=5, max=13923, avg=14.36, stdev=187.27 00:11:33.407 clat (usec): min=177, max=41980, avg=411.34, stdev=2698.23 00:11:33.407 lat (usec): min=182, max=54975, avg=425.70, stdev=2730.73 00:11:33.407 clat percentiles (usec): 00:11:33.407 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:11:33.407 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:11:33.407 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 273], 00:11:33.407 | 99.00th=[ 412], 99.50th=[ 611], 99.90th=[41157], 99.95th=[41157], 00:11:33.407 | 99.99th=[42206] 00:11:33.407 bw ( KiB/s): min= 104, max=17080, per=32.60%, avg=9316.00, stdev=7979.46, samples=6 00:11:33.407 iops : min= 26, max= 4270, avg=2329.00, stdev=1994.87, samples=6 00:11:33.407 lat (usec) : 250=82.26%, 500=17.16%, 750=0.09%, 1000=0.01% 00:11:33.407 lat (msec) : 2=0.02%, 50=0.44% 00:11:33.407 cpu : usr=1.64%, sys=3.90%, ctx=8206, majf=0, minf=1 00:11:33.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.407 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.407 issued rwts: total=8203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.407 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1048182: Fri Dec 6 19:09:43 2024 00:11:33.407 read: IOPS=4765, 
BW=18.6MiB/s (19.5MB/s)(71.1MiB/3818msec) 00:11:33.407 slat (usec): min=4, max=28863, avg=12.49, stdev=267.03 00:11:33.407 clat (usec): min=154, max=11735, avg=193.94, stdev=128.07 00:11:33.407 lat (usec): min=158, max=29165, avg=206.43, stdev=319.29 00:11:33.407 clat percentiles (usec): 00:11:33.407 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:11:33.407 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 192], 00:11:33.407 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:11:33.407 | 99.00th=[ 277], 99.50th=[ 334], 99.90th=[ 510], 99.95th=[ 570], 00:11:33.407 | 99.99th=[ 8160] 00:11:33.407 bw ( KiB/s): min=16805, max=20384, per=67.44%, avg=19272.71, stdev=1232.13, samples=7 00:11:33.407 iops : min= 4201, max= 5096, avg=4818.14, stdev=308.12, samples=7 00:11:33.407 lat (usec) : 250=98.50%, 500=1.37%, 750=0.07% 00:11:33.407 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.01% 00:11:33.407 cpu : usr=1.70%, sys=4.58%, ctx=18204, majf=0, minf=2 00:11:33.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.407 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.407 issued rwts: total=18196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.407 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1048184: Fri Dec 6 19:09:43 2024 00:11:33.407 read: IOPS=55, BW=222KiB/s (228kB/s)(716KiB/3221msec) 00:11:33.407 slat (nsec): min=6980, max=74903, avg=22458.57, stdev=10736.89 00:11:33.407 clat (usec): min=226, max=41951, avg=17829.75, stdev=20102.65 00:11:33.407 lat (usec): min=243, max=41987, avg=17852.26, stdev=20103.57 00:11:33.407 clat percentiles (usec): 00:11:33.407 | 1.00th=[ 231], 5.00th=[ 269], 10.00th=[ 302], 20.00th=[ 330], 00:11:33.407 | 30.00th=[ 359], 
40.00th=[ 396], 50.00th=[ 437], 60.00th=[40633], 00:11:33.407 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:33.407 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:33.407 | 99.99th=[42206] 00:11:33.407 bw ( KiB/s): min= 96, max= 872, per=0.80%, avg=230.67, stdev=314.45, samples=6 00:11:33.407 iops : min= 24, max= 218, avg=57.67, stdev=78.61, samples=6 00:11:33.407 lat (usec) : 250=3.89%, 500=52.22% 00:11:33.407 lat (msec) : 10=0.56%, 50=42.78% 00:11:33.407 cpu : usr=0.16%, sys=0.03%, ctx=181, majf=0, minf=1 00:11:33.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.407 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.407 issued rwts: total=180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.407 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1048185: Fri Dec 6 19:09:43 2024 00:11:33.407 read: IOPS=238, BW=953KiB/s (976kB/s)(2808KiB/2946msec) 00:11:33.407 slat (nsec): min=7692, max=68499, avg=17354.36, stdev=8023.24 00:11:33.407 clat (usec): min=221, max=41071, avg=4141.38, stdev=11865.45 00:11:33.407 lat (usec): min=238, max=41093, avg=4158.73, stdev=11867.56 00:11:33.407 clat percentiles (usec): 00:11:33.407 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:11:33.407 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:11:33.407 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 519], 95.00th=[41157], 00:11:33.407 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:33.407 | 99.99th=[41157] 00:11:33.407 bw ( KiB/s): min= 104, max= 5016, per=3.84%, avg=1096.00, stdev=2191.45, samples=5 00:11:33.407 iops : min= 26, max= 1254, avg=274.00, stdev=547.86, samples=5 00:11:33.407 lat (usec) : 
250=0.28%, 500=89.47%, 750=0.28% 00:11:33.407 lat (msec) : 2=0.28%, 10=0.14%, 50=9.39% 00:11:33.407 cpu : usr=0.31%, sys=0.54%, ctx=704, majf=0, minf=2 00:11:33.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.407 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.407 issued rwts: total=703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.407 00:11:33.407 Run status group 0 (all jobs): 00:11:33.407 READ: bw=27.9MiB/s (29.3MB/s), 222KiB/s-18.6MiB/s (228kB/s-19.5MB/s), io=107MiB (112MB), run=2946-3818msec 00:11:33.407 00:11:33.407 Disk stats (read/write): 00:11:33.407 nvme0n1: ios=8198/0, merge=0/0, ticks=3116/0, in_queue=3116, util=95.28% 00:11:33.407 nvme0n2: ios=17288/0, merge=0/0, ticks=3250/0, in_queue=3250, util=95.07% 00:11:33.407 nvme0n3: ios=225/0, merge=0/0, ticks=3421/0, in_queue=3421, util=100.00% 00:11:33.407 nvme0n4: ios=754/0, merge=0/0, ticks=3695/0, in_queue=3695, util=99.86% 00:11:33.667 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:33.667 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:33.927 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:33.927 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:34.186 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:34.186 19:09:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:34.751 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:34.751 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:34.751 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:34.751 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1048089 00:11:34.751 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:34.751 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:35.008 nvmf hotplug test: fio failed as expected 00:11:35.008 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.265 rmmod nvme_tcp 00:11:35.265 rmmod nvme_fabrics 00:11:35.265 rmmod nvme_keyring 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1046056 ']' 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1046056 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1046056 ']' 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1046056 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.265 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1046056 00:11:35.524 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.524 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.524 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1046056' 00:11:35.524 killing process with pid 1046056 00:11:35.524 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1046056 00:11:35.524 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1046056 00:11:35.524 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.524 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.524 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.525 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:35.525 19:09:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:35.525 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.525 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.525 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.525 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.525 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.525 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.525 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.065 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.065 00:11:38.065 real 0m24.414s 00:11:38.065 user 1m25.002s 00:11:38.065 sys 0m7.138s 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.066 ************************************ 00:11:38.066 END TEST nvmf_fio_target 00:11:38.066 ************************************ 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:11:38.066 ************************************ 00:11:38.066 START TEST nvmf_bdevio 00:11:38.066 ************************************ 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:38.066 * Looking for test storage... 00:11:38.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.066 19:09:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.066 --rc genhtml_branch_coverage=1 00:11:38.066 --rc genhtml_function_coverage=1 00:11:38.066 --rc genhtml_legend=1 00:11:38.066 --rc geninfo_all_blocks=1 00:11:38.066 --rc geninfo_unexecuted_blocks=1 00:11:38.066 00:11:38.066 ' 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.066 --rc genhtml_branch_coverage=1 00:11:38.066 --rc genhtml_function_coverage=1 00:11:38.066 --rc genhtml_legend=1 00:11:38.066 --rc geninfo_all_blocks=1 00:11:38.066 --rc geninfo_unexecuted_blocks=1 00:11:38.066 00:11:38.066 ' 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.066 --rc genhtml_branch_coverage=1 00:11:38.066 --rc genhtml_function_coverage=1 00:11:38.066 --rc genhtml_legend=1 00:11:38.066 --rc geninfo_all_blocks=1 00:11:38.066 --rc geninfo_unexecuted_blocks=1 00:11:38.066 00:11:38.066 ' 00:11:38.066 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.066 --rc genhtml_branch_coverage=1 00:11:38.085 --rc genhtml_function_coverage=1 00:11:38.085 --rc genhtml_legend=1 00:11:38.085 --rc geninfo_all_blocks=1 00:11:38.085 --rc geninfo_unexecuted_blocks=1 00:11:38.085 00:11:38.085 ' 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.085 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.086 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.990 19:09:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.990 19:09:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:39.990 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:39.990 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.990 
19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:39.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:39.990 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.990 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:40.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:40.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:11:40.248 00:11:40.248 --- 10.0.0.2 ping statistics --- 00:11:40.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.248 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:11:40.248 00:11:40.248 --- 10.0.0.1 ping statistics --- 00:11:40.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.248 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:40.248 19:09:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.248 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1050943 00:11:40.249 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:40.249 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1050943 00:11:40.249 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1050943 ']' 00:11:40.249 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.249 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.249 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.249 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.249 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.249 [2024-12-06 19:09:50.752031] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:11:40.249 [2024-12-06 19:09:50.752109] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.249 [2024-12-06 19:09:50.824042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.507 [2024-12-06 19:09:50.884838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.507 [2024-12-06 19:09:50.884901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.507 [2024-12-06 19:09:50.884914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.507 [2024-12-06 19:09:50.884926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.507 [2024-12-06 19:09:50.884935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:40.507 [2024-12-06 19:09:50.886726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:40.507 [2024-12-06 19:09:50.886750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:40.507 [2024-12-06 19:09:50.886802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:40.507 [2024-12-06 19:09:50.886805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 [2024-12-06 19:09:51.046381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:40.507 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.507 19:09:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.789 Malloc0 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.789 [2024-12-06 19:09:51.114718] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:40.789 { 00:11:40.789 "params": { 00:11:40.789 "name": "Nvme$subsystem", 00:11:40.789 "trtype": "$TEST_TRANSPORT", 00:11:40.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:40.789 "adrfam": "ipv4", 00:11:40.789 "trsvcid": "$NVMF_PORT", 00:11:40.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:40.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:40.789 "hdgst": ${hdgst:-false}, 00:11:40.789 "ddgst": ${ddgst:-false} 00:11:40.789 }, 00:11:40.789 "method": "bdev_nvme_attach_controller" 00:11:40.789 } 00:11:40.789 EOF 00:11:40.789 )") 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:40.789 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:40.789 "params": { 00:11:40.789 "name": "Nvme1", 00:11:40.789 "trtype": "tcp", 00:11:40.789 "traddr": "10.0.0.2", 00:11:40.789 "adrfam": "ipv4", 00:11:40.789 "trsvcid": "4420", 00:11:40.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:40.789 "hdgst": false, 00:11:40.789 "ddgst": false 00:11:40.789 }, 00:11:40.789 "method": "bdev_nvme_attach_controller" 00:11:40.789 }' 00:11:40.789 [2024-12-06 19:09:51.165284] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:11:40.789 [2024-12-06 19:09:51.165351] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050971 ] 00:11:40.789 [2024-12-06 19:09:51.233347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:40.789 [2024-12-06 19:09:51.296132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.789 [2024-12-06 19:09:51.296182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.790 [2024-12-06 19:09:51.296186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.077 I/O targets: 00:11:41.077 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:41.077 00:11:41.077 00:11:41.077 CUnit - A unit testing framework for C - Version 2.1-3 00:11:41.077 http://cunit.sourceforge.net/ 00:11:41.077 00:11:41.077 00:11:41.077 Suite: bdevio tests on: Nvme1n1 00:11:41.336 Test: blockdev write read block ...passed 00:11:41.336 Test: blockdev write zeroes read block ...passed 00:11:41.336 Test: blockdev write zeroes read no split ...passed 00:11:41.336 Test: blockdev write zeroes read split 
...passed 00:11:41.336 Test: blockdev write zeroes read split partial ...passed 00:11:41.336 Test: blockdev reset ...[2024-12-06 19:09:51.751194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:41.336 [2024-12-06 19:09:51.751318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229c8c0 (9): Bad file descriptor 00:11:41.336 [2024-12-06 19:09:51.806729] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:41.336 passed 00:11:41.336 Test: blockdev write read 8 blocks ...passed 00:11:41.336 Test: blockdev write read size > 128k ...passed 00:11:41.336 Test: blockdev write read invalid size ...passed 00:11:41.336 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.336 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.336 Test: blockdev write read max offset ...passed 00:11:41.594 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.594 Test: blockdev writev readv 8 blocks ...passed 00:11:41.594 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.594 Test: blockdev writev readv block ...passed 00:11:41.594 Test: blockdev writev readv size > 128k ...passed 00:11:41.594 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.594 Test: blockdev comparev and writev ...[2024-12-06 19:09:52.018812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.594 [2024-12-06 19:09:52.018850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:41.594 [2024-12-06 19:09:52.018875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.594 [2024-12-06 
19:09:52.018892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:41.594 [2024-12-06 19:09:52.019206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.594 [2024-12-06 19:09:52.019242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:41.594 [2024-12-06 19:09:52.019265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.594 [2024-12-06 19:09:52.019281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:41.594 [2024-12-06 19:09:52.019575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.594 [2024-12-06 19:09:52.019599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:41.594 [2024-12-06 19:09:52.019620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.595 [2024-12-06 19:09:52.019635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:41.595 [2024-12-06 19:09:52.019982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.595 [2024-12-06 19:09:52.020006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:41.595 [2024-12-06 19:09:52.020027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.595 [2024-12-06 19:09:52.020043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:41.595 passed 00:11:41.595 Test: blockdev nvme passthru rw ...passed 00:11:41.595 Test: blockdev nvme passthru vendor specific ...[2024-12-06 19:09:52.102928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:41.595 [2024-12-06 19:09:52.102956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:41.595 [2024-12-06 19:09:52.103089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:41.595 [2024-12-06 19:09:52.103112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:41.595 [2024-12-06 19:09:52.103245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:41.595 [2024-12-06 19:09:52.103268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:41.595 [2024-12-06 19:09:52.103404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:41.595 [2024-12-06 19:09:52.103427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:41.595 passed 00:11:41.595 Test: blockdev nvme admin passthru ...passed 00:11:41.595 Test: blockdev copy ...passed 00:11:41.595 00:11:41.595 Run Summary: Type Total Ran Passed Failed Inactive 00:11:41.595 suites 1 1 n/a 0 0 00:11:41.595 tests 23 23 23 0 0 00:11:41.595 asserts 152 152 152 0 n/a 00:11:41.595 00:11:41.595 Elapsed time = 1.053 seconds 
00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.853 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.853 rmmod nvme_tcp 00:11:41.853 rmmod nvme_fabrics 00:11:41.853 rmmod nvme_keyring 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1050943 ']' 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1050943 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1050943 ']' 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1050943 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1050943 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1050943' 00:11:42.111 killing process with pid 1050943 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1050943 00:11:42.111 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1050943 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.371 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.278 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.278 00:11:44.278 real 0m6.584s 00:11:44.278 user 0m10.575s 00:11:44.278 sys 0m2.251s 00:11:44.278 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.278 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:44.278 ************************************ 00:11:44.278 END TEST nvmf_bdevio 00:11:44.278 ************************************ 00:11:44.278 19:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:44.278 00:11:44.278 real 3m58.159s 00:11:44.278 user 10m19.069s 00:11:44.278 sys 1m8.855s 00:11:44.278 19:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.278 19:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:44.278 ************************************ 00:11:44.278 END TEST nvmf_target_core 00:11:44.278 ************************************ 00:11:44.278 19:09:54 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:44.278 19:09:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.278 19:09:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.278 19:09:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:44.278 ************************************ 00:11:44.278 START TEST nvmf_target_extra 00:11:44.278 ************************************ 00:11:44.278 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:44.538 * Looking for test storage... 00:11:44.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
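The trace above walks through scripts/common.sh's version comparison (`lt 1.15 2` via `cmp_versions`): both version strings are split into arrays with `IFS=.-:` and compared field by field. A minimal stand-alone sketch of that idea (not the actual SPDK helper, just an illustration of the pattern the xtrace shows):

```shell
#!/usr/bin/env bash
# lt VER1 VER2 -- succeed if VER1 is strictly less than VER2.
# Splits on '.', '-' and ':' like the traced cmp_versions does.
lt() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Missing fields default to 0, so "2" compares like "2.0".
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal is not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the log's `lcov --version` check resolves to `return 0`: the first field comparison (`1 < 2`) already decides the result.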
00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.538 --rc genhtml_branch_coverage=1 00:11:44.538 --rc genhtml_function_coverage=1 00:11:44.538 --rc genhtml_legend=1 00:11:44.538 --rc geninfo_all_blocks=1 
00:11:44.538 --rc geninfo_unexecuted_blocks=1 00:11:44.538 00:11:44.538 ' 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.538 --rc genhtml_branch_coverage=1 00:11:44.538 --rc genhtml_function_coverage=1 00:11:44.538 --rc genhtml_legend=1 00:11:44.538 --rc geninfo_all_blocks=1 00:11:44.538 --rc geninfo_unexecuted_blocks=1 00:11:44.538 00:11:44.538 ' 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.538 --rc genhtml_branch_coverage=1 00:11:44.538 --rc genhtml_function_coverage=1 00:11:44.538 --rc genhtml_legend=1 00:11:44.538 --rc geninfo_all_blocks=1 00:11:44.538 --rc geninfo_unexecuted_blocks=1 00:11:44.538 00:11:44.538 ' 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.538 --rc genhtml_branch_coverage=1 00:11:44.538 --rc genhtml_function_coverage=1 00:11:44.538 --rc genhtml_legend=1 00:11:44.538 --rc geninfo_all_blocks=1 00:11:44.538 --rc geninfo_unexecuted_blocks=1 00:11:44.538 00:11:44.538 ' 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.538 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.539 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.539 ************************************ 00:11:44.539 START TEST nvmf_example 00:11:44.539 ************************************ 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:44.539 * Looking for test storage... 00:11:44.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:44.539 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.798 
19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
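Note the traced `decimal` helper guards its argument with `[[ $d =~ ^[0-9]+$ ]]` before using it numerically. The log also shows what happens without such a guard: `'[' '' -eq 1 ']'` in nvmf/common.sh line 33 emits `[: : integer expression expected`, because `test -eq` cannot compare an empty string. A hedged, self-contained sketch of the guarded pattern (helper names are made up for illustration):

```shell
#!/usr/bin/env bash
# is_uint: same regex guard the traced decimal() uses.
is_uint() { [[ $1 =~ ^[0-9]+$ ]]; }

# check FLAG -- compare against 1 only after validating, so an empty or
# unset flag degrades cleanly instead of printing
# "[: : integer expression expected" as seen in the log.
check() {
    local val=$1
    if is_uint "$val" && [ "$val" -eq 1 ]; then
        echo "enabled"
    else
        echo "disabled"
    fi
}

check ""   # empty string: no test(1) error, just "disabled"
check 1    # "enabled"
```

An alternative seen elsewhere in these scripts is a default expansion such as `"${FLAG:-0}"`, which also keeps `-eq` from ever seeing an empty operand.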
00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.798 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.798 --rc genhtml_branch_coverage=1 00:11:44.798 --rc genhtml_function_coverage=1 00:11:44.799 --rc genhtml_legend=1 00:11:44.799 --rc geninfo_all_blocks=1 00:11:44.799 --rc geninfo_unexecuted_blocks=1 00:11:44.799 00:11:44.799 ' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.799 --rc genhtml_branch_coverage=1 00:11:44.799 --rc genhtml_function_coverage=1 00:11:44.799 --rc genhtml_legend=1 00:11:44.799 --rc geninfo_all_blocks=1 00:11:44.799 --rc geninfo_unexecuted_blocks=1 00:11:44.799 00:11:44.799 ' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.799 --rc genhtml_branch_coverage=1 00:11:44.799 --rc genhtml_function_coverage=1 00:11:44.799 --rc genhtml_legend=1 00:11:44.799 --rc geninfo_all_blocks=1 00:11:44.799 --rc geninfo_unexecuted_blocks=1 00:11:44.799 00:11:44.799 ' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.799 --rc 
genhtml_branch_coverage=1 00:11:44.799 --rc genhtml_function_coverage=1 00:11:44.799 --rc genhtml_legend=1 00:11:44.799 --rc geninfo_all_blocks=1 00:11:44.799 --rc geninfo_unexecuted_blocks=1 00:11:44.799 00:11:44.799 ' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:44.799 19:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.799 
19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.799 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.336 19:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.336 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:47.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:47.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:47.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.337 19:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:47.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.337 
19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:11:47.337 00:11:47.337 --- 10.0.0.2 ping statistics --- 00:11:47.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.337 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:11:47.337 00:11:47.337 --- 10.0.0.1 ping statistics --- 00:11:47.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.337 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.337 19:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1053236 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1053236 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1053236 ']' 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:47.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.337 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:48.271 
19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:48.271 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:00.492 Initializing NVMe Controllers 00:12:00.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:00.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:00.492 Initialization complete. Launching workers. 00:12:00.492 ======================================================== 00:12:00.492 Latency(us) 00:12:00.492 Device Information : IOPS MiB/s Average min max 00:12:00.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14457.79 56.48 4427.48 885.77 16011.22 00:12:00.492 ======================================================== 00:12:00.492 Total : 14457.79 56.48 4427.48 885.77 16011.22 00:12:00.492 00:12:00.492 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:00.492 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:00.492 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.492 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:00.492 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.492 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:00.492 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.492 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.492 rmmod nvme_tcp 00:12:00.492 rmmod nvme_fabrics 00:12:00.492 rmmod nvme_keyring 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1053236 ']' 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1053236 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1053236 ']' 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1053236 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1053236 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1053236' 00:12:00.492 killing process with pid 1053236 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1053236 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1053236 00:12:00.492 nvmf threads initialize successfully 00:12:00.492 bdev subsystem init successfully 00:12:00.492 created a nvmf target service 00:12:00.492 create targets's poll groups done 00:12:00.492 all subsystems of target started 00:12:00.492 nvmf target is running 00:12:00.492 all subsystems of target stopped 00:12:00.492 destroy targets's poll groups done 00:12:00.492 destroyed the nvmf target service 00:12:00.492 bdev subsystem 
finish successfully 00:12:00.492 nvmf threads destroy successfully 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.492 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.065 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.065 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:01.065 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.065 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.065 00:12:01.065 real 0m16.355s 00:12:01.065 user 0m45.124s 00:12:01.065 sys 0m3.817s 00:12:01.065 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.065 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.065 ************************************ 00:12:01.065 END TEST nvmf_example 00:12:01.065 ************************************ 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.066 ************************************ 00:12:01.066 START TEST nvmf_filesystem 00:12:01.066 ************************************ 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:01.066 * Looking for test storage... 
00:12:01.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:01.066 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:01.066 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:01.066 --rc genhtml_branch_coverage=1 00:12:01.066 --rc genhtml_function_coverage=1 00:12:01.066 --rc genhtml_legend=1 00:12:01.066 --rc geninfo_all_blocks=1 00:12:01.066 --rc geninfo_unexecuted_blocks=1 00:12:01.066 00:12:01.066 ' 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:01.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.066 --rc genhtml_branch_coverage=1 00:12:01.066 --rc genhtml_function_coverage=1 00:12:01.066 --rc genhtml_legend=1 00:12:01.066 --rc geninfo_all_blocks=1 00:12:01.066 --rc geninfo_unexecuted_blocks=1 00:12:01.066 00:12:01.066 ' 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:01.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.066 --rc genhtml_branch_coverage=1 00:12:01.066 --rc genhtml_function_coverage=1 00:12:01.066 --rc genhtml_legend=1 00:12:01.066 --rc geninfo_all_blocks=1 00:12:01.066 --rc geninfo_unexecuted_blocks=1 00:12:01.066 00:12:01.066 ' 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:01.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.066 --rc genhtml_branch_coverage=1 00:12:01.066 --rc genhtml_function_coverage=1 00:12:01.066 --rc genhtml_legend=1 00:12:01.066 --rc geninfo_all_blocks=1 00:12:01.066 --rc geninfo_unexecuted_blocks=1 00:12:01.066 00:12:01.066 ' 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:01.066 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:01.066 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:01.066 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:01.066 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:01.067 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:01.067 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:01.067 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:01.067 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:01.067 #define SPDK_CONFIG_H 00:12:01.067 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:01.067 #define SPDK_CONFIG_APPS 1 00:12:01.067 #define SPDK_CONFIG_ARCH native 00:12:01.067 #undef SPDK_CONFIG_ASAN 00:12:01.067 #undef SPDK_CONFIG_AVAHI 00:12:01.067 #undef SPDK_CONFIG_CET 00:12:01.067 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:01.067 #define SPDK_CONFIG_COVERAGE 1 00:12:01.067 #define SPDK_CONFIG_CROSS_PREFIX 00:12:01.067 #undef SPDK_CONFIG_CRYPTO 00:12:01.067 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:01.067 #undef SPDK_CONFIG_CUSTOMOCF 00:12:01.067 #undef SPDK_CONFIG_DAOS 00:12:01.067 #define SPDK_CONFIG_DAOS_DIR 00:12:01.067 #define SPDK_CONFIG_DEBUG 1 00:12:01.067 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:01.067 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:01.067 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:01.067 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:01.067 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:01.067 #undef SPDK_CONFIG_DPDK_UADK 00:12:01.067 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:01.067 #define SPDK_CONFIG_EXAMPLES 1 00:12:01.067 #undef SPDK_CONFIG_FC 00:12:01.067 #define SPDK_CONFIG_FC_PATH 00:12:01.067 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:01.067 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:01.067 #define SPDK_CONFIG_FSDEV 1 00:12:01.067 #undef SPDK_CONFIG_FUSE 00:12:01.067 #undef SPDK_CONFIG_FUZZER 00:12:01.067 #define SPDK_CONFIG_FUZZER_LIB 00:12:01.067 #undef SPDK_CONFIG_GOLANG 00:12:01.067 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:01.067 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:01.067 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:01.067 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:01.067 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:01.067 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:01.067 #undef SPDK_CONFIG_HAVE_LZ4 00:12:01.067 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:01.067 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:01.067 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:01.067 #define SPDK_CONFIG_IDXD 1 00:12:01.067 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:01.067 #undef SPDK_CONFIG_IPSEC_MB 00:12:01.067 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:01.067 #define SPDK_CONFIG_ISAL 1 00:12:01.067 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:01.067 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:01.067 #define SPDK_CONFIG_LIBDIR 00:12:01.067 #undef SPDK_CONFIG_LTO 00:12:01.067 #define SPDK_CONFIG_MAX_LCORES 128 00:12:01.067 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:01.067 #define SPDK_CONFIG_NVME_CUSE 1 00:12:01.067 #undef SPDK_CONFIG_OCF 00:12:01.067 #define SPDK_CONFIG_OCF_PATH 00:12:01.067 #define SPDK_CONFIG_OPENSSL_PATH 00:12:01.067 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:01.067 #define SPDK_CONFIG_PGO_DIR 00:12:01.067 #undef SPDK_CONFIG_PGO_USE 00:12:01.067 #define SPDK_CONFIG_PREFIX /usr/local 00:12:01.067 #undef SPDK_CONFIG_RAID5F 00:12:01.067 #undef SPDK_CONFIG_RBD 00:12:01.067 #define SPDK_CONFIG_RDMA 1 00:12:01.067 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:01.067 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:01.067 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:01.067 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:01.067 #define SPDK_CONFIG_SHARED 1 00:12:01.067 #undef SPDK_CONFIG_SMA 00:12:01.067 #define SPDK_CONFIG_TESTS 1 00:12:01.067 #undef SPDK_CONFIG_TSAN 00:12:01.067 #define SPDK_CONFIG_UBLK 1 00:12:01.067 #define SPDK_CONFIG_UBSAN 1 00:12:01.068 #undef SPDK_CONFIG_UNIT_TESTS 00:12:01.068 #undef SPDK_CONFIG_URING 00:12:01.068 #define SPDK_CONFIG_URING_PATH 00:12:01.068 #undef SPDK_CONFIG_URING_ZNS 00:12:01.068 #undef SPDK_CONFIG_USDT 00:12:01.068 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:01.068 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:01.068 #define SPDK_CONFIG_VFIO_USER 1 00:12:01.068 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:01.068 #define SPDK_CONFIG_VHOST 1 00:12:01.068 #define SPDK_CONFIG_VIRTIO 1 00:12:01.068 #undef SPDK_CONFIG_VTUNE 00:12:01.068 #define SPDK_CONFIG_VTUNE_DIR 00:12:01.068 #define SPDK_CONFIG_WERROR 1 00:12:01.068 #define SPDK_CONFIG_WPDK_DIR 00:12:01.068 #undef SPDK_CONFIG_XNVME 00:12:01.068 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:01.068 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:01.068 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:01.068 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:01.069 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:01.069 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:01.069 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:01.069 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:01.069 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:01.069 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:01.069 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:01.332 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:01.332 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:01.332 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:01.333 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1054939 ]] 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1054939 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ZodByi 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ZodByi/tests/target /tmp/spdk.ZodByi 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55957876736 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988507648 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6030630912 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.334 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984220672 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994251776 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375273472 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397703168 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30994030592 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:12:01.334 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=225280 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:01.334 * Looking for test storage... 
00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55957876736 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8245223424 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.334 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:01.334 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:01.335 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:01.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.335 --rc genhtml_branch_coverage=1 00:12:01.335 --rc genhtml_function_coverage=1 00:12:01.335 --rc genhtml_legend=1 00:12:01.335 --rc geninfo_all_blocks=1 00:12:01.335 --rc geninfo_unexecuted_blocks=1 00:12:01.335 00:12:01.335 ' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:01.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.335 --rc genhtml_branch_coverage=1 00:12:01.335 --rc genhtml_function_coverage=1 00:12:01.335 --rc genhtml_legend=1 00:12:01.335 --rc geninfo_all_blocks=1 00:12:01.335 --rc geninfo_unexecuted_blocks=1 00:12:01.335 00:12:01.335 ' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:01.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.335 --rc genhtml_branch_coverage=1 00:12:01.335 --rc genhtml_function_coverage=1 00:12:01.335 --rc genhtml_legend=1 00:12:01.335 --rc geninfo_all_blocks=1 00:12:01.335 --rc geninfo_unexecuted_blocks=1 00:12:01.335 00:12:01.335 ' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:01.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.335 --rc genhtml_branch_coverage=1 00:12:01.335 --rc genhtml_function_coverage=1 00:12:01.335 --rc genhtml_legend=1 00:12:01.335 --rc geninfo_all_blocks=1 00:12:01.335 --rc geninfo_unexecuted_blocks=1 00:12:01.335 00:12:01.335 ' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.335 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.335 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.336 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.874 19:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:03.874 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:03.874 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.874 19:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:03.874 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:03.874 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:03.874 19:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.874 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:03.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:12:03.875 00:12:03.875 --- 10.0.0.2 ping statistics --- 00:12:03.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.875 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:12:03.875 00:12:03.875 --- 10.0.0.1 ping statistics --- 00:12:03.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.875 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:03.875 19:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:03.875 ************************************ 00:12:03.875 START TEST nvmf_filesystem_no_in_capsule 00:12:03.875 ************************************ 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1056588 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1056588 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1056588 ']' 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.875 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.875 [2024-12-06 19:10:14.252938] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:12:03.875 [2024-12-06 19:10:14.253056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.875 [2024-12-06 19:10:14.325513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.875 [2024-12-06 19:10:14.387999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.875 [2024-12-06 19:10:14.388049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:03.875 [2024-12-06 19:10:14.388078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.875 [2024-12-06 19:10:14.388090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.875 [2024-12-06 19:10:14.388100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.875 [2024-12-06 19:10:14.389560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.875 [2024-12-06 19:10:14.389619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.875 [2024-12-06 19:10:14.389693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.875 [2024-12-06 19:10:14.389697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.133 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.133 [2024-12-06 19:10:14.532422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.134 Malloc1 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.134 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.391 [2024-12-06 19:10:14.720354] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:04.391 19:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:04.391 { 00:12:04.391 "name": "Malloc1", 00:12:04.391 "aliases": [ 00:12:04.391 "7089120d-3cab-4eb0-8099-3e4532b79f40" 00:12:04.391 ], 00:12:04.391 "product_name": "Malloc disk", 00:12:04.391 "block_size": 512, 00:12:04.391 "num_blocks": 1048576, 00:12:04.391 "uuid": "7089120d-3cab-4eb0-8099-3e4532b79f40", 00:12:04.391 "assigned_rate_limits": { 00:12:04.391 "rw_ios_per_sec": 0, 00:12:04.391 "rw_mbytes_per_sec": 0, 00:12:04.391 "r_mbytes_per_sec": 0, 00:12:04.391 "w_mbytes_per_sec": 0 00:12:04.391 }, 00:12:04.391 "claimed": true, 00:12:04.391 "claim_type": "exclusive_write", 00:12:04.391 "zoned": false, 00:12:04.391 "supported_io_types": { 00:12:04.391 "read": true, 00:12:04.391 "write": true, 00:12:04.391 "unmap": true, 00:12:04.391 "flush": true, 00:12:04.391 "reset": true, 00:12:04.391 "nvme_admin": false, 00:12:04.391 "nvme_io": false, 00:12:04.391 "nvme_io_md": false, 00:12:04.391 "write_zeroes": true, 00:12:04.391 "zcopy": true, 00:12:04.391 "get_zone_info": false, 00:12:04.391 "zone_management": false, 00:12:04.391 "zone_append": false, 00:12:04.391 "compare": false, 00:12:04.391 "compare_and_write": 
false, 00:12:04.391 "abort": true, 00:12:04.391 "seek_hole": false, 00:12:04.391 "seek_data": false, 00:12:04.391 "copy": true, 00:12:04.391 "nvme_iov_md": false 00:12:04.391 }, 00:12:04.391 "memory_domains": [ 00:12:04.391 { 00:12:04.391 "dma_device_id": "system", 00:12:04.391 "dma_device_type": 1 00:12:04.391 }, 00:12:04.391 { 00:12:04.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.391 "dma_device_type": 2 00:12:04.391 } 00:12:04.391 ], 00:12:04.391 "driver_specific": {} 00:12:04.391 } 00:12:04.391 ]' 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:04.391 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:04.392 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:04.392 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:04.392 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.956 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
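The size derivation traced above (autotest_common.sh@1386-1392: `rpc_cmd bdev_get_bdevs -b Malloc1`, then `jq '.[] .block_size'` and `jq '.[] .num_blocks'`, then `echo 512`) reduces to simple arithmetic. A minimal standalone sketch, plugging in the values this run reported rather than querying a live target:

```shell
# Values reported by `rpc_cmd bdev_get_bdevs -b Malloc1` in the trace above
bs=512        # .block_size extracted via jq
nb=1048576    # .num_blocks extracted via jq

# The helper echoes the size in MiB; filesystem.sh@58 then converts that
# back to bytes (malloc_size=536870912) to compare against the size the
# initiator later sees through /sys/block
bdev_size_mib=$(( bs * nb / 1024 / 1024 ))
malloc_size=$(( bdev_size_mib * 1024 * 1024 ))
echo "$bdev_size_mib MiB = $malloc_size bytes"
```

With this run's Malloc1 geometry the two sizes agree, which is what the later `(( nvme_size == malloc_size ))` check at filesystem.sh@67 relies on.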
# waitforserial SPDKISFASTANDAWESOME 00:12:04.956 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:04.956 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.956 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:04.956 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:07.475 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:07.476 19:10:17 
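The device-name lookup at filesystem.sh@63 pairs `lsblk -l -o NAME,SERIAL` with a PCRE lookahead to map the subsystem serial back to the kernel block device. A standalone sketch against canned `lsblk` output — the device rows are illustrative, and `grep -oP` assumes GNU grep built with PCRE support:

```shell
# Canned `lsblk -l -o NAME,SERIAL` output; a real run queries the kernel
lsblk_out='NAME    SERIAL
sda     WD-1234567890
nvme0n1 SPDKISFASTANDAWESOME'

# Same regex as filesystem.sh@63: capture the word immediately preceding
# the target serial; the lookahead keeps the serial itself out of the match
nvme_name=$(printf '%s\n' "$lsblk_out" \
    | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
echo "$nvme_name"
```

Only the `NAME` column survives into `nvme_name`, which is why the trace can build `/dev/nvme0n1` and `/sys/block/nvme0n1` paths from it directly.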
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:07.476 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:07.733 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:09.109 19:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.109 ************************************ 00:12:09.109 START TEST filesystem_ext4 00:12:09.109 ************************************ 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:09.109 19:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:09.109 mke2fs 1.47.0 (5-Feb-2023) 00:12:09.109 Discarding device blocks: 0/522240 done 00:12:09.109 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:09.109 Filesystem UUID: fbf09000-e8e9-49a5-a796-c1c7f40ec299 00:12:09.109 Superblock backups stored on blocks: 00:12:09.109 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:09.109 00:12:09.109 Allocating group tables: 0/64 done 00:12:09.109 Writing inode tables: 0/64 done 00:12:09.109 Creating journal (8192 blocks): done 00:12:09.109 Writing superblocks and filesystem accounting information: 0/64 done 00:12:09.109 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:09.109 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.371 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.371 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:14.371 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.371 19:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:14.371 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1056588 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.372 00:12:14.372 real 0m5.570s 00:12:14.372 user 0m0.017s 00:12:14.372 sys 0m0.053s 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:14.372 ************************************ 00:12:14.372 END TEST filesystem_ext4 00:12:14.372 ************************************ 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:14.372 
19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.372 ************************************ 00:12:14.372 START TEST filesystem_btrfs 00:12:14.372 ************************************ 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:14.372 19:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:14.372 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:14.937 btrfs-progs v6.8.1 00:12:14.937 See https://btrfs.readthedocs.io for more information. 00:12:14.937 00:12:14.937 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:14.937 NOTE: several default settings have changed in version 5.15, please make sure 00:12:14.937 this does not affect your deployments: 00:12:14.937 - DUP for metadata (-m dup) 00:12:14.937 - enabled no-holes (-O no-holes) 00:12:14.937 - enabled free-space-tree (-R free-space-tree) 00:12:14.937 00:12:14.937 Label: (null) 00:12:14.937 UUID: 5cc375d2-7aa2-4907-894e-258f25699098 00:12:14.937 Node size: 16384 00:12:14.937 Sector size: 4096 (CPU page size: 4096) 00:12:14.937 Filesystem size: 510.00MiB 00:12:14.937 Block group profiles: 00:12:14.937 Data: single 8.00MiB 00:12:14.937 Metadata: DUP 32.00MiB 00:12:14.937 System: DUP 8.00MiB 00:12:14.937 SSD detected: yes 00:12:14.937 Zoned device: no 00:12:14.937 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:14.937 Checksum: crc32c 00:12:14.937 Number of devices: 1 00:12:14.937 Devices: 00:12:14.937 ID SIZE PATH 00:12:14.937 1 510.00MiB /dev/nvme0n1p1 00:12:14.937 00:12:14.937 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:14.937 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.502 19:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.502 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:15.502 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.502 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:15.502 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:15.502 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1056588 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.502 00:12:15.502 real 0m1.097s 00:12:15.502 user 0m0.016s 00:12:15.502 sys 0m0.096s 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.502 
19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:15.502 ************************************ 00:12:15.502 END TEST filesystem_btrfs 00:12:15.502 ************************************ 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.502 ************************************ 00:12:15.502 START TEST filesystem_xfs 00:12:15.502 ************************************ 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:15.502 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:15.760 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:15.760 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:15.760 = sectsz=512 attr=2, projid32bit=1 00:12:15.760 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:15.760 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:15.760 data = bsize=4096 blocks=130560, imaxpct=25 00:12:15.761 = sunit=0 swidth=0 blks 00:12:15.761 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:15.761 log =internal log bsize=4096 blocks=16384, version=2 00:12:15.761 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:15.761 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:16.695 Discarding blocks...Done. 
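Across the three `make_filesystem` invocations above (autotest_common.sh@930-941), the traced `'[' ext4 = ext4 ']'` / `force=-F` and `force=-f` branches show the force flag being chosen per filesystem before mkfs runs: ext4's mkfs forces with uppercase `-F`, while btrfs and xfs use lowercase `-f`. A minimal sketch of that branch — the helper name here is hypothetical, not SPDK's:

```shell
# mkfs.ext4 forces with -F; mkfs.btrfs and mkfs.xfs force with -f.
# Hypothetical helper mirroring the branch at autotest_common.sh@935-938.
pick_force_flag() {
    if [ "$1" = ext4 ]; then
        printf '%s\n' -F
    else
        printf '%s\n' -f
    fi
}

pick_force_flag ext4    # prints -F
pick_force_flag btrfs   # prints -f
```

The force flag matters here because the partition was freshly created by `parted` and mkfs would otherwise prompt or refuse on a device it considers in use.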
00:12:16.695 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:16.695 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1056588 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:19.223 19:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:19.223 00:12:19.223 real 0m3.560s 00:12:19.223 user 0m0.021s 00:12:19.223 sys 0m0.060s 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:19.223 ************************************ 00:12:19.223 END TEST filesystem_xfs 00:12:19.223 ************************************ 00:12:19.223 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:19.481 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:19.481 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.481 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.481 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:19.481 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:19.481 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1056588 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1056588 ']' 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1056588 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.481 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1056588 00:12:19.739 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.739 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.739 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1056588' 00:12:19.739 killing process with pid 1056588 00:12:19.739 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1056588 00:12:19.739 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1056588 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:19.997 00:12:19.997 real 0m16.300s 00:12:19.997 user 1m3.138s 00:12:19.997 sys 0m1.963s 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.997 ************************************ 00:12:19.997 END TEST nvmf_filesystem_no_in_capsule 00:12:19.997 ************************************ 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.997 19:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.997 ************************************ 00:12:19.997 START TEST nvmf_filesystem_in_capsule 00:12:19.997 ************************************ 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1058772 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1058772 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1058772 ']' 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.997 19:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.997 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.254 [2024-12-06 19:10:30.603733] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:12:20.254 [2024-12-06 19:10:30.603831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.254 [2024-12-06 19:10:30.676246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.254 [2024-12-06 19:10:30.731161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.254 [2024-12-06 19:10:30.731218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.254 [2024-12-06 19:10:30.731241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.254 [2024-12-06 19:10:30.731252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.254 [2024-12-06 19:10:30.731261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:20.254 [2024-12-06 19:10:30.732683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.254 [2024-12-06 19:10:30.732739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.254 [2024-12-06 19:10:30.732804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.254 [2024-12-06 19:10:30.732807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.513 [2024-12-06 19:10:30.876513] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.513 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.513 Malloc1 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.513 19:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.513 [2024-12-06 19:10:31.069422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:20.513 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.513 19:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:20.772 { 00:12:20.772 "name": "Malloc1", 00:12:20.772 "aliases": [ 00:12:20.772 "cc57c475-a1cc-432b-b9a6-ca6609cdbc44" 00:12:20.772 ], 00:12:20.772 "product_name": "Malloc disk", 00:12:20.772 "block_size": 512, 00:12:20.772 "num_blocks": 1048576, 00:12:20.772 "uuid": "cc57c475-a1cc-432b-b9a6-ca6609cdbc44", 00:12:20.772 "assigned_rate_limits": { 00:12:20.772 "rw_ios_per_sec": 0, 00:12:20.772 "rw_mbytes_per_sec": 0, 00:12:20.772 "r_mbytes_per_sec": 0, 00:12:20.772 "w_mbytes_per_sec": 0 00:12:20.772 }, 00:12:20.772 "claimed": true, 00:12:20.772 "claim_type": "exclusive_write", 00:12:20.772 "zoned": false, 00:12:20.772 "supported_io_types": { 00:12:20.772 "read": true, 00:12:20.772 "write": true, 00:12:20.772 "unmap": true, 00:12:20.772 "flush": true, 00:12:20.772 "reset": true, 00:12:20.772 "nvme_admin": false, 00:12:20.772 "nvme_io": false, 00:12:20.772 "nvme_io_md": false, 00:12:20.772 "write_zeroes": true, 00:12:20.772 "zcopy": true, 00:12:20.772 "get_zone_info": false, 00:12:20.772 "zone_management": false, 00:12:20.772 "zone_append": false, 00:12:20.772 "compare": false, 00:12:20.772 "compare_and_write": false, 00:12:20.772 "abort": true, 00:12:20.772 "seek_hole": false, 00:12:20.772 "seek_data": false, 00:12:20.772 "copy": true, 00:12:20.772 "nvme_iov_md": false 00:12:20.772 }, 00:12:20.772 "memory_domains": [ 00:12:20.772 { 00:12:20.772 "dma_device_id": "system", 00:12:20.772 "dma_device_type": 1 00:12:20.772 }, 00:12:20.772 { 00:12:20.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.772 "dma_device_type": 2 00:12:20.772 } 00:12:20.772 ], 00:12:20.772 
"driver_specific": {} 00:12:20.772 } 00:12:20.772 ]' 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:20.772 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.336 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.336 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:21.336 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.336 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:21.336 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:23.229 19:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:23.229 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:23.794 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:24.359 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:25.729 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.730 ************************************ 00:12:25.730 START TEST filesystem_in_capsule_ext4 00:12:25.730 ************************************ 00:12:25.730 19:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:25.730 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:25.730 mke2fs 1.47.0 (5-Feb-2023) 00:12:25.730 Discarding device blocks: 
0/522240 done 00:12:25.730 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:25.730 Filesystem UUID: 69773ac0-f293-444b-9926-325ae102b8f0 00:12:25.730 Superblock backups stored on blocks: 00:12:25.730 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:25.730 00:12:25.730 Allocating group tables: 0/64 done 00:12:25.730 Writing inode tables: 0/64 done 00:12:25.730 Creating journal (8192 blocks): done 00:12:26.729 Writing superblocks and filesystem accounting information: 0/64 done 00:12:26.729 00:12:26.729 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:26.729 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.027 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.027 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:32.027 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.027 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:32.027 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:32.027 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.027 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 --
target/filesystem.sh@37 -- # kill -0 1058772 00:12:32.027 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.027 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.285 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.285 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.285 00:12:32.285 real 0m6.671s 00:12:32.285 user 0m0.025s 00:12:32.285 sys 0m0.065s 00:12:32.285 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.285 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:32.285 ************************************ 00:12:32.285 END TEST filesystem_in_capsule_ext4 00:12:32.285 ************************************ 00:12:32.285 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:32.285 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.285 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.286 ************************************ 00:12:32.286 START 
TEST filesystem_in_capsule_btrfs 00:12:32.286 ************************************ 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:32.286 btrfs-progs v6.8.1 00:12:32.286 See https://btrfs.readthedocs.io for more information. 00:12:32.286 00:12:32.286 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:32.286 NOTE: several default settings have changed in version 5.15, please make sure 00:12:32.286 this does not affect your deployments: 00:12:32.286 - DUP for metadata (-m dup) 00:12:32.286 - enabled no-holes (-O no-holes) 00:12:32.286 - enabled free-space-tree (-R free-space-tree) 00:12:32.286 00:12:32.286 Label: (null) 00:12:32.286 UUID: 9f319df4-5cb8-471e-97b0-7fc1f95d273e 00:12:32.286 Node size: 16384 00:12:32.286 Sector size: 4096 (CPU page size: 4096) 00:12:32.286 Filesystem size: 510.00MiB 00:12:32.286 Block group profiles: 00:12:32.286 Data: single 8.00MiB 00:12:32.286 Metadata: DUP 32.00MiB 00:12:32.286 System: DUP 8.00MiB 00:12:32.286 SSD detected: yes 00:12:32.286 Zoned device: no 00:12:32.286 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:32.286 Checksum: crc32c 00:12:32.286 Number of devices: 1 00:12:32.286 Devices: 00:12:32.286 ID SIZE PATH 00:12:32.286 1 510.00MiB /dev/nvme0n1p1 00:12:32.286 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:32.286 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1058772 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.221 00:12:33.221 real 0m0.911s 00:12:33.221 user 0m0.024s 00:12:33.221 sys 0m0.099s 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:33.221 ************************************ 00:12:33.221 END TEST filesystem_in_capsule_btrfs 00:12:33.221 ************************************ 00:12:33.221 19:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.221 ************************************ 00:12:33.221 START TEST filesystem_in_capsule_xfs 00:12:33.221 ************************************ 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:33.221 
19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:33.221 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:33.221 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:33.221 = sectsz=512 attr=2, projid32bit=1 00:12:33.221 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:33.221 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:33.221 data = bsize=4096 blocks=130560, imaxpct=25 00:12:33.221 = sunit=0 swidth=0 blks 00:12:33.221 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:33.221 log =internal log bsize=4096 blocks=16384, version=2 00:12:33.221 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:33.221 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:34.156 Discarding blocks...Done. 
00:12:34.156 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:34.157 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:36.679 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1058772 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:36.679 00:12:36.679 real 0m3.489s 00:12:36.679 user 0m0.013s 00:12:36.679 sys 0m0.061s 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:36.679 ************************************ 00:12:36.679 END TEST filesystem_in_capsule_xfs 00:12:36.679 ************************************ 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:36.679 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.937 19:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1058772 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1058772 ']' 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1058772 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.937 19:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1058772 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1058772' 00:12:36.937 killing process with pid 1058772 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1058772 00:12:36.937 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1058772 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:37.503 00:12:37.503 real 0m17.272s 00:12:37.503 user 1m6.979s 00:12:37.503 sys 0m2.080s 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.503 ************************************ 00:12:37.503 END TEST nvmf_filesystem_in_capsule 00:12:37.503 ************************************ 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.503 rmmod nvme_tcp 00:12:37.503 rmmod nvme_fabrics 00:12:37.503 rmmod nvme_keyring 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.503 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.408 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.408 00:12:39.408 real 0m38.510s 00:12:39.408 user 2m11.272s 00:12:39.408 sys 0m5.859s 00:12:39.408 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.408 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.408 ************************************ 00:12:39.408 END TEST nvmf_filesystem 00:12:39.408 ************************************ 00:12:39.408 19:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:39.408 19:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.408 19:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.408 19:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.667 ************************************ 00:12:39.667 START TEST nvmf_target_discovery 00:12:39.667 ************************************ 00:12:39.667 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:39.667 * Looking for test storage... 
00:12:39.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:39.667 
19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:39.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.667 --rc genhtml_branch_coverage=1 00:12:39.667 --rc genhtml_function_coverage=1 00:12:39.667 --rc genhtml_legend=1 00:12:39.667 --rc geninfo_all_blocks=1 00:12:39.667 --rc geninfo_unexecuted_blocks=1 00:12:39.667 00:12:39.667 ' 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:39.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.667 --rc genhtml_branch_coverage=1 00:12:39.667 --rc genhtml_function_coverage=1 00:12:39.667 --rc genhtml_legend=1 00:12:39.667 --rc geninfo_all_blocks=1 00:12:39.667 --rc geninfo_unexecuted_blocks=1 00:12:39.667 00:12:39.667 ' 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:39.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.667 --rc genhtml_branch_coverage=1 00:12:39.667 --rc genhtml_function_coverage=1 00:12:39.667 --rc genhtml_legend=1 00:12:39.667 --rc geninfo_all_blocks=1 00:12:39.667 --rc geninfo_unexecuted_blocks=1 00:12:39.667 00:12:39.667 ' 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:39.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.667 --rc genhtml_branch_coverage=1 00:12:39.667 --rc genhtml_function_coverage=1 00:12:39.667 --rc genhtml_legend=1 00:12:39.667 --rc geninfo_all_blocks=1 00:12:39.667 --rc geninfo_unexecuted_blocks=1 00:12:39.667 00:12:39.667 ' 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.667 19:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.667 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.668 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.220 19:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.220 19:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:42.220 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.220 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:42.221 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.221 19:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:42.221 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.221 19:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:42.221 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:12:42.221 00:12:42.221 --- 10.0.0.2 ping statistics --- 00:12:42.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.221 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:42.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:12:42.221 00:12:42.221 --- 10.0.0.1 ping statistics --- 00:12:42.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.221 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1062966 00:12:42.221 19:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1062966 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1062966 ']' 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.221 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.221 [2024-12-06 19:10:52.533099] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:12:42.221 [2024-12-06 19:10:52.533191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.221 [2024-12-06 19:10:52.605314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.221 [2024-12-06 19:10:52.662532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:42.221 [2024-12-06 19:10:52.662591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.221 [2024-12-06 19:10:52.662614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.221 [2024-12-06 19:10:52.662624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.221 [2024-12-06 19:10:52.662634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.221 [2024-12-06 19:10:52.664235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.221 [2024-12-06 19:10:52.664293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.221 [2024-12-06 19:10:52.664411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.221 [2024-12-06 19:10:52.664416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.222 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.222 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:42.222 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.222 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.222 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.480 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.480 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:42.480 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.480 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.480 [2024-12-06 19:10:52.813340] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.480 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.480 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:42.480 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:42.480 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:42.480 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 Null1 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 
19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 [2024-12-06 19:10:52.861840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 Null2 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 
19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 Null3 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 Null4 00:12:42.481 
19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.481 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:42.740 00:12:42.740 Discovery Log Number of Records 6, Generation counter 6 00:12:42.740 =====Discovery Log Entry 0====== 00:12:42.740 trtype: tcp 00:12:42.740 adrfam: ipv4 00:12:42.740 subtype: current discovery subsystem 00:12:42.740 treq: not required 00:12:42.740 portid: 0 00:12:42.740 trsvcid: 4420 00:12:42.740 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:42.740 traddr: 10.0.0.2 00:12:42.740 eflags: explicit discovery connections, duplicate discovery information 00:12:42.740 sectype: none 00:12:42.740 =====Discovery Log Entry 1====== 00:12:42.740 trtype: tcp 00:12:42.740 adrfam: ipv4 00:12:42.740 subtype: nvme subsystem 00:12:42.740 treq: not required 00:12:42.740 portid: 0 00:12:42.740 trsvcid: 4420 00:12:42.740 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:42.740 traddr: 10.0.0.2 00:12:42.740 eflags: none 00:12:42.740 sectype: none 00:12:42.740 =====Discovery Log Entry 2====== 00:12:42.740 
trtype: tcp 00:12:42.740 adrfam: ipv4 00:12:42.740 subtype: nvme subsystem 00:12:42.740 treq: not required 00:12:42.740 portid: 0 00:12:42.740 trsvcid: 4420 00:12:42.740 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:42.740 traddr: 10.0.0.2 00:12:42.740 eflags: none 00:12:42.740 sectype: none 00:12:42.740 =====Discovery Log Entry 3====== 00:12:42.740 trtype: tcp 00:12:42.740 adrfam: ipv4 00:12:42.740 subtype: nvme subsystem 00:12:42.740 treq: not required 00:12:42.740 portid: 0 00:12:42.740 trsvcid: 4420 00:12:42.740 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:42.740 traddr: 10.0.0.2 00:12:42.740 eflags: none 00:12:42.740 sectype: none 00:12:42.740 =====Discovery Log Entry 4====== 00:12:42.740 trtype: tcp 00:12:42.740 adrfam: ipv4 00:12:42.740 subtype: nvme subsystem 00:12:42.740 treq: not required 00:12:42.740 portid: 0 00:12:42.740 trsvcid: 4420 00:12:42.740 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:42.741 traddr: 10.0.0.2 00:12:42.741 eflags: none 00:12:42.741 sectype: none 00:12:42.741 =====Discovery Log Entry 5====== 00:12:42.741 trtype: tcp 00:12:42.741 adrfam: ipv4 00:12:42.741 subtype: discovery subsystem referral 00:12:42.741 treq: not required 00:12:42.741 portid: 0 00:12:42.741 trsvcid: 4430 00:12:42.741 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:42.741 traddr: 10.0.0.2 00:12:42.741 eflags: none 00:12:42.741 sectype: none 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:42.741 Perform nvmf subsystem discovery via RPC 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 [ 00:12:42.741 { 00:12:42.741 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:42.741 "subtype": "Discovery", 00:12:42.741 "listen_addresses": [ 00:12:42.741 { 00:12:42.741 "trtype": "TCP", 00:12:42.741 "adrfam": "IPv4", 00:12:42.741 "traddr": "10.0.0.2", 00:12:42.741 "trsvcid": "4420" 00:12:42.741 } 00:12:42.741 ], 00:12:42.741 "allow_any_host": true, 00:12:42.741 "hosts": [] 00:12:42.741 }, 00:12:42.741 { 00:12:42.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.741 "subtype": "NVMe", 00:12:42.741 "listen_addresses": [ 00:12:42.741 { 00:12:42.741 "trtype": "TCP", 00:12:42.741 "adrfam": "IPv4", 00:12:42.741 "traddr": "10.0.0.2", 00:12:42.741 "trsvcid": "4420" 00:12:42.741 } 00:12:42.741 ], 00:12:42.741 "allow_any_host": true, 00:12:42.741 "hosts": [], 00:12:42.741 "serial_number": "SPDK00000000000001", 00:12:42.741 "model_number": "SPDK bdev Controller", 00:12:42.741 "max_namespaces": 32, 00:12:42.741 "min_cntlid": 1, 00:12:42.741 "max_cntlid": 65519, 00:12:42.741 "namespaces": [ 00:12:42.741 { 00:12:42.741 "nsid": 1, 00:12:42.741 "bdev_name": "Null1", 00:12:42.741 "name": "Null1", 00:12:42.741 "nguid": "EEA5D2A4C311405A8EB23ACBE3AD445D", 00:12:42.741 "uuid": "eea5d2a4-c311-405a-8eb2-3acbe3ad445d" 00:12:42.741 } 00:12:42.741 ] 00:12:42.741 }, 00:12:42.741 { 00:12:42.741 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:42.741 "subtype": "NVMe", 00:12:42.741 "listen_addresses": [ 00:12:42.741 { 00:12:42.741 "trtype": "TCP", 00:12:42.741 "adrfam": "IPv4", 00:12:42.741 "traddr": "10.0.0.2", 00:12:42.741 "trsvcid": "4420" 00:12:42.741 } 00:12:42.741 ], 00:12:42.741 "allow_any_host": true, 00:12:42.741 "hosts": [], 00:12:42.741 "serial_number": "SPDK00000000000002", 00:12:42.741 "model_number": "SPDK bdev Controller", 00:12:42.741 "max_namespaces": 32, 00:12:42.741 "min_cntlid": 1, 00:12:42.741 "max_cntlid": 65519, 00:12:42.741 "namespaces": [ 00:12:42.741 { 00:12:42.741 "nsid": 1, 00:12:42.741 "bdev_name": "Null2", 00:12:42.741 "name": "Null2", 00:12:42.741 "nguid": "23F786F1493F4F2C8FEB990C10532D42", 
00:12:42.741 "uuid": "23f786f1-493f-4f2c-8feb-990c10532d42" 00:12:42.741 } 00:12:42.741 ] 00:12:42.741 }, 00:12:42.741 { 00:12:42.741 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:42.741 "subtype": "NVMe", 00:12:42.741 "listen_addresses": [ 00:12:42.741 { 00:12:42.741 "trtype": "TCP", 00:12:42.741 "adrfam": "IPv4", 00:12:42.741 "traddr": "10.0.0.2", 00:12:42.741 "trsvcid": "4420" 00:12:42.741 } 00:12:42.741 ], 00:12:42.741 "allow_any_host": true, 00:12:42.741 "hosts": [], 00:12:42.741 "serial_number": "SPDK00000000000003", 00:12:42.741 "model_number": "SPDK bdev Controller", 00:12:42.741 "max_namespaces": 32, 00:12:42.741 "min_cntlid": 1, 00:12:42.741 "max_cntlid": 65519, 00:12:42.741 "namespaces": [ 00:12:42.741 { 00:12:42.741 "nsid": 1, 00:12:42.741 "bdev_name": "Null3", 00:12:42.741 "name": "Null3", 00:12:42.741 "nguid": "8A7651378EEB4893BF1E7027A96F7055", 00:12:42.741 "uuid": "8a765137-8eeb-4893-bf1e-7027a96f7055" 00:12:42.741 } 00:12:42.741 ] 00:12:42.741 }, 00:12:42.741 { 00:12:42.741 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:42.741 "subtype": "NVMe", 00:12:42.741 "listen_addresses": [ 00:12:42.741 { 00:12:42.741 "trtype": "TCP", 00:12:42.741 "adrfam": "IPv4", 00:12:42.741 "traddr": "10.0.0.2", 00:12:42.741 "trsvcid": "4420" 00:12:42.741 } 00:12:42.741 ], 00:12:42.741 "allow_any_host": true, 00:12:42.741 "hosts": [], 00:12:42.741 "serial_number": "SPDK00000000000004", 00:12:42.741 "model_number": "SPDK bdev Controller", 00:12:42.741 "max_namespaces": 32, 00:12:42.741 "min_cntlid": 1, 00:12:42.741 "max_cntlid": 65519, 00:12:42.741 "namespaces": [ 00:12:42.741 { 00:12:42.741 "nsid": 1, 00:12:42.741 "bdev_name": "Null4", 00:12:42.741 "name": "Null4", 00:12:42.741 "nguid": "502ACBEC926840559EB84A0B1CBCEE9B", 00:12:42.741 "uuid": "502acbec-9268-4055-9eb8-4a0b1cbcee9b" 00:12:42.741 } 00:12:42.741 ] 00:12:42.741 } 00:12:42.741 ] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 
19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.741 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:42.742 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:42.742 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:42.742 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:42.742 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.742 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:42.742 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.742 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:42.742 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.742 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.742 rmmod nvme_tcp 00:12:42.742 rmmod nvme_fabrics 00:12:43.000 rmmod nvme_keyring 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1062966 ']' 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1062966 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1062966 ']' 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1062966 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1062966 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1062966' 00:12:43.000 killing process with pid 1062966 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1062966 00:12:43.000 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1062966 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.259 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.182 00:12:45.182 real 0m5.653s 00:12:45.182 user 0m4.666s 00:12:45.182 sys 0m1.988s 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.182 ************************************ 00:12:45.182 END TEST nvmf_target_discovery 00:12:45.182 ************************************ 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.182 ************************************ 00:12:45.182 START TEST nvmf_referrals 00:12:45.182 ************************************ 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:45.182 * Looking for test storage... 
00:12:45.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:45.182 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:45.441 19:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:45.441 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:45.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.442 
--rc genhtml_branch_coverage=1 00:12:45.442 --rc genhtml_function_coverage=1 00:12:45.442 --rc genhtml_legend=1 00:12:45.442 --rc geninfo_all_blocks=1 00:12:45.442 --rc geninfo_unexecuted_blocks=1 00:12:45.442 00:12:45.442 ' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:45.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.442 --rc genhtml_branch_coverage=1 00:12:45.442 --rc genhtml_function_coverage=1 00:12:45.442 --rc genhtml_legend=1 00:12:45.442 --rc geninfo_all_blocks=1 00:12:45.442 --rc geninfo_unexecuted_blocks=1 00:12:45.442 00:12:45.442 ' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:45.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.442 --rc genhtml_branch_coverage=1 00:12:45.442 --rc genhtml_function_coverage=1 00:12:45.442 --rc genhtml_legend=1 00:12:45.442 --rc geninfo_all_blocks=1 00:12:45.442 --rc geninfo_unexecuted_blocks=1 00:12:45.442 00:12:45.442 ' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:45.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.442 --rc genhtml_branch_coverage=1 00:12:45.442 --rc genhtml_function_coverage=1 00:12:45.442 --rc genhtml_legend=1 00:12:45.442 --rc geninfo_all_blocks=1 00:12:45.442 --rc geninfo_unexecuted_blocks=1 00:12:45.442 00:12:45.442 ' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.442 
19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.442 19:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.442 19:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.442 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:47.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:47.981 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:47.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:47.981 19:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.981 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:47.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:12:47.982 00:12:47.982 --- 10.0.0.2 ping statistics --- 00:12:47.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.982 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:12:47.982 00:12:47.982 --- 10.0.0.1 ping statistics --- 00:12:47.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.982 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1065057 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1065057 00:12:47.982 
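The trace above shows `nvmf/common.sh` wiring up the test topology: flush addresses on both interfaces, create the `cvl_0_0_ns_spdk` namespace, move the target interface into it, assign 10.0.0.1 (initiator) and 10.0.0.2 (target), bring links up, open TCP port 4420 with a tagged iptables rule, then ping in both directions. The sequence can be sketched as a standalone script. This is a hedged reconstruction, not the harness's actual code: the `cvl_0_*` names come from the rig's real NICs, and the commands need root, so the sketch defaults to a dry run (`DRY_RUN=0` would execute for real).

```shell
#!/usr/bin/env bash
# Sketch of the interface/namespace setup sequence seen in the trace.
# Names and IPs mirror the log; DRY_RUN=1 (the default here) prints each
# command instead of executing it, since the real steps need root + NICs.
set -euo pipefail

TARGET_IF=${TARGET_IF:-cvl_0_0}
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}
NS=${NS:-cvl_0_0_ns_spdk}

run() {
    if [[ "${DRY_RUN:-1}" == "1" ]]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"     # target side lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tag lets teardown strip the rule later.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"
# Verify reachability both ways, exactly as the trace does.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With both endpoints pingable, the target app can then be launched inside the namespace (`ip netns exec $NS nvmf_tgt ...`), which is what the `NVMF_TARGET_NS_CMD` prefix in the trace accomplishes.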
19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1065057 ']' 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.982 [2024-12-06 19:10:58.254414] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:12:47.982 [2024-12-06 19:10:58.254494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.982 [2024-12-06 19:10:58.328475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.982 [2024-12-06 19:10:58.387716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.982 [2024-12-06 19:10:58.387782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:47.982 [2024-12-06 19:10:58.387797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.982 [2024-12-06 19:10:58.387809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.982 [2024-12-06 19:10:58.387821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.982 [2024-12-06 19:10:58.389483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.982 [2024-12-06 19:10:58.389540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.982 [2024-12-06 19:10:58.389562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.982 [2024-12-06 19:10:58.389565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.982 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.982 [2024-12-06 19:10:58.549317] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.240 [2024-12-06 19:10:58.576862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:48.240 19:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.240 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.499 19:10:58 
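The `get_referral_ips` checks above compare two views of the referral list: what the target reports over RPC (`nvmf_discovery_get_referrals`) and what a host actually sees from `nvme discover ... -o json`. On the nvme side, the jq filter in the trace keeps every discovery-page record except the current discovery subsystem itself, extracts each `traddr`, and sorts. Below is a self-contained sketch of that filtering step against a canned discovery page; it uses plain bash rather than jq so it runs anywhere, and the sample JSON is illustrative (shaped like `nvme discover -o json` output), not captured from this run.

```shell
#!/usr/bin/env bash
# Illustration of the trace's jq filter
#   .records[] | select(.subtype != "current discovery subsystem").traddr
# reimplemented in bash over a canned discovery page, followed by sort.
set -euo pipefail

discovery_json='{
  "records": [
    {"subtype": "current discovery subsystem", "traddr": "10.0.0.2"},
    {"subtype": "discovery subsystem referral", "traddr": "127.0.0.3"},
    {"subtype": "discovery subsystem referral", "traddr": "127.0.0.2"},
    {"subtype": "discovery subsystem referral", "traddr": "127.0.0.4"}
  ]
}'

get_referral_ips() {
    # Walk record-by-record; drop the entry describing the discovery
    # subsystem we queried, keep every referral traddr, then sort.
    local json=$1 line
    local -a ips=()
    local re='"traddr": *"([^"]+)"'
    while IFS= read -r line; do
        [[ $line == *'"current discovery subsystem"'* ]] && continue
        if [[ $line =~ $re ]]; then
            ips+=("${BASH_REMATCH[1]}")
        fi
    done <<<"$json"
    printf '%s\n' "${ips[@]}" | sort | xargs
}

get_referral_ips "$discovery_json"   # -> 127.0.0.2 127.0.0.3 127.0.0.4
```

Sorting both the RPC-side and nvme-side lists before the `[[ ... == ... ]]` comparison is what makes the check order-independent, which matters because the discovery log page does not guarantee referral ordering.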
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.499 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.756 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:49.013 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:49.014 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.014 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:49.272 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:49.530 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:49.530 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:49.530 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:49.530 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:49.530 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:49.530 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.530 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:49.788 19:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:49.788 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:50.045 rmmod nvme_tcp 00:12:50.045 rmmod nvme_fabrics 00:12:50.045 rmmod nvme_keyring 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:50.045 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:50.046 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1065057 ']' 00:12:50.046 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1065057 00:12:50.046 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1065057 ']' 00:12:50.046 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1065057 00:12:50.046 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:50.303 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.303 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1065057 00:12:50.303 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.303 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.303 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1065057' 00:12:50.303 killing process with pid 1065057 00:12:50.303 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1065057 00:12:50.303 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1065057 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.563 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.464 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.464 00:12:52.464 real 0m7.240s 00:12:52.464 user 0m11.462s 00:12:52.464 sys 0m2.347s 00:12:52.464 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.464 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.464 
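The teardown at the end of the test (`nvmftestfini` / `nvmf_tcp_fini` in the trace) mirrors the setup: the `iptr` helper removes only the iptables rules tagged with the `SPDK_NVMF` comment by round-tripping `iptables-save` through `grep -v` into `iptables-restore`, then the namespace is removed and the leftover initiator address flushed. A hedged sketch of that cleanup path, again defaulting to a dry run since the real commands need root:

```shell
#!/usr/bin/env bash
# Sketch of the teardown seen in the trace: strip SPDK_NVMF-tagged
# iptables rules, remove the test namespace, flush the initiator address.
# DRY_RUN=1 (the default here) echoes instead of executing.
set -euo pipefail

NS=${NS:-cvl_0_0_ns_spdk}
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}

run() {
    if [[ "${DRY_RUN:-1}" == "1" ]]; then
        echo "+ $*"
    else
        "$@"
    fi
}

cleanup() {
    # iptr: reload the ruleset minus anything carrying the SPDK_NVMF tag.
    if [[ "${DRY_RUN:-1}" == "1" ]]; then
        echo "+ iptables-save | grep -v SPDK_NVMF | iptables-restore"
    else
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    fi
    run ip netns delete "$NS"
    run ip -4 addr flush "$INITIATOR_IF"
}

cleanup
```

Tagging the rules at insert time and filtering by tag at cleanup means the teardown never has to reconstruct the exact rule arguments it inserted, and it leaves unrelated firewall rules untouched.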
************************************ 00:12:52.464 END TEST nvmf_referrals 00:12:52.464 ************************************ 00:12:52.464 19:11:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:52.464 19:11:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:52.464 19:11:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.464 19:11:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.464 ************************************ 00:12:52.464 START TEST nvmf_connect_disconnect 00:12:52.464 ************************************ 00:12:52.464 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:52.724 * Looking for test storage... 
00:12:52.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:52.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.724 --rc genhtml_branch_coverage=1 00:12:52.724 --rc genhtml_function_coverage=1 00:12:52.724 --rc genhtml_legend=1 00:12:52.724 --rc geninfo_all_blocks=1 00:12:52.724 --rc geninfo_unexecuted_blocks=1 00:12:52.724 00:12:52.724 ' 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:52.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.724 --rc genhtml_branch_coverage=1 00:12:52.724 --rc genhtml_function_coverage=1 00:12:52.724 --rc genhtml_legend=1 00:12:52.724 --rc geninfo_all_blocks=1 00:12:52.724 --rc geninfo_unexecuted_blocks=1 00:12:52.724 00:12:52.724 ' 00:12:52.724 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:52.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.724 --rc genhtml_branch_coverage=1 00:12:52.724 --rc genhtml_function_coverage=1 00:12:52.724 --rc genhtml_legend=1 00:12:52.724 --rc geninfo_all_blocks=1 00:12:52.724 --rc geninfo_unexecuted_blocks=1 00:12:52.724 00:12:52.724 ' 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:52.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.725 --rc genhtml_branch_coverage=1 00:12:52.725 --rc genhtml_function_coverage=1 00:12:52.725 --rc genhtml_legend=1 00:12:52.725 --rc geninfo_all_blocks=1 00:12:52.725 --rc geninfo_unexecuted_blocks=1 00:12:52.725 00:12:52.725 ' 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.725 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.258 19:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:55.258 19:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:55.258 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:55.258 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.258 19:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:55.258 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.258 19:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.258 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:55.259 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.259 19:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:55.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:12:55.259 00:12:55.259 --- 10.0.0.2 ping statistics --- 00:12:55.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.259 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
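The `nvmf_tcp_init` sequence traced above builds the test topology: the target NIC is moved into a fresh network namespace, each side gets a 10.0.0.x/24 address, an iptables rule opens port 4420, and connectivity is ping-verified in both directions. A consolidated sketch of those commands, wrapped in a function and deliberately not invoked, since it requires root and the two physical `cvl_0_0`/`cvl_0_1` interfaces present on this rig:

```shell
# setup_spdk_netns: condensed form of the nvmf_tcp_init steps in the
# trace above. Defined but NOT invoked here: it needs root privileges
# and the two ice-driver ports (cvl_0_0 / cvl_0_1) from this host.
setup_spdk_netns() {
    local ns=cvl_0_0_ns_spdk
    ip netns add "$ns"                        # target-side namespace
    ip link set cvl_0_0 netns "$ns"           # move target NIC inside
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator address
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # open NVMe/TCP port 4420; the SPDK_NVMF comment lets teardown
    # strip exactly this rule via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                        # host -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1    # target -> host
}
```

Tagging the rule with an `SPDK_NVMF` comment is the design choice that makes the later `iptr` cleanup (iptables-save filtered through `grep -v SPDK_NVMF`, then restored) safe for any unrelated firewall rules on the host.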
00:12:55.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:12:55.259 00:12:55.259 --- 10.0.0.1 ping statistics --- 00:12:55.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.259 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1067373 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1067373 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1067373 ']' 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.259 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.259 [2024-12-06 19:11:05.622781] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:12:55.259 [2024-12-06 19:11:05.622862] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.259 [2024-12-06 19:11:05.692723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.259 [2024-12-06 19:11:05.751322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
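`waitforlisten` in the trace above blocks until the freshly launched nvmf_tgt (pid 1067373) is up and its RPC socket `/var/tmp/spdk.sock` answers, retrying up to `max_retries=100`. The core of that is a bounded poll loop; a simplified sketch that polls for a path to appear (hypothetical name `wait_for_path` — the real helper additionally checks the pid is alive and issues an RPC against the UNIX socket):

```shell
# wait_for_path PATH TIMEOUT_S: poll until PATH exists or TIMEOUT_S
# seconds elapse. Simplified stand-in for waitforlisten, which also
# verifies the pid is alive and that the RPC socket accepts requests.
wait_for_path() {
    local path=$1 timeout=${2:-10} i
    for (( i = 0; i < timeout * 10; i++ )); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}
```

Bounding the loop matters: an unconditional wait would hang the whole autotest run if nvmf_tgt failed to start.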
00:12:55.259 [2024-12-06 19:11:05.751373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.259 [2024-12-06 19:11:05.751396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.259 [2024-12-06 19:11:05.751407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.259 [2024-12-06 19:11:05.751416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.259 [2024-12-06 19:11:05.752901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.259 [2024-12-06 19:11:05.752960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.259 [2024-12-06 19:11:05.753026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.259 [2024-12-06 19:11:05.753029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:55.517 19:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.517 [2024-12-06 19:11:05.912271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.517 19:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.517 [2024-12-06 19:11:05.982336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:55.517 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:55.518 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:58.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:09.756 19:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:09.756 rmmod nvme_tcp 00:13:09.756 rmmod nvme_fabrics 00:13:09.756 rmmod nvme_keyring 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1067373 ']' 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1067373 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1067373 ']' 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1067373 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1067373 
00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1067373' 00:13:09.756 killing process with pid 1067373 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1067373 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1067373 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:09.756 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:09.757 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:09.757 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:09.757 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:09.757 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.757 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.757 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.757 19:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.757 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.665 00:13:11.665 real 0m19.015s 00:13:11.665 user 0m56.436s 00:13:11.665 sys 0m3.561s 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:11.665 ************************************ 00:13:11.665 END TEST nvmf_connect_disconnect 00:13:11.665 ************************************ 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.665 ************************************ 00:13:11.665 START TEST nvmf_multitarget 00:13:11.665 ************************************ 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:11.665 * Looking for test storage... 
00:13:11.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:11.665 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:11.666 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.666 --rc genhtml_branch_coverage=1 00:13:11.666 --rc genhtml_function_coverage=1 00:13:11.666 --rc genhtml_legend=1 00:13:11.666 --rc geninfo_all_blocks=1 00:13:11.666 --rc geninfo_unexecuted_blocks=1 00:13:11.666 00:13:11.666 ' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.666 --rc genhtml_branch_coverage=1 00:13:11.666 --rc genhtml_function_coverage=1 00:13:11.666 --rc genhtml_legend=1 00:13:11.666 --rc geninfo_all_blocks=1 00:13:11.666 --rc geninfo_unexecuted_blocks=1 00:13:11.666 00:13:11.666 ' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.666 --rc genhtml_branch_coverage=1 00:13:11.666 --rc genhtml_function_coverage=1 00:13:11.666 --rc genhtml_legend=1 00:13:11.666 --rc geninfo_all_blocks=1 00:13:11.666 --rc geninfo_unexecuted_blocks=1 00:13:11.666 00:13:11.666 ' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.666 --rc genhtml_branch_coverage=1 00:13:11.666 --rc genhtml_function_coverage=1 00:13:11.666 --rc genhtml_legend=1 00:13:11.666 --rc geninfo_all_blocks=1 00:13:11.666 --rc geninfo_unexecuted_blocks=1 00:13:11.666 00:13:11.666 ' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.666 19:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:11.666 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:11.667 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:11.667 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.667 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.667 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.667 19:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:11.667 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:11.667 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.667 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:14.199 19:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:14.199 19:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:14.199 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:14.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:14.200 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.200 19:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:14.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.200 
19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:14.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.200 19:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:14.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:13:14.200 00:13:14.200 --- 10.0.0.2 ping statistics --- 00:13:14.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.200 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:14.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:13:14.200 00:13:14.200 --- 10.0.0.1 ping statistics --- 00:13:14.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.200 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1071129 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1071129 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1071129 ']' 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.200 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 [2024-12-06 19:11:24.613721] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:13:14.200 [2024-12-06 19:11:24.613825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.200 [2024-12-06 19:11:24.685811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.200 [2024-12-06 19:11:24.745058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.200 [2024-12-06 19:11:24.745113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:14.200 [2024-12-06 19:11:24.745141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.200 [2024-12-06 19:11:24.745152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.200 [2024-12-06 19:11:24.745162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.200 [2024-12-06 19:11:24.746690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.201 [2024-12-06 19:11:24.746750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.201 [2024-12-06 19:11:24.746816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.201 [2024-12-06 19:11:24.746819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.459 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.459 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:14.459 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:14.459 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:14.459 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:14.459 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.459 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:14.459 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:14.459 19:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:14.459 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:14.459 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:14.715 "nvmf_tgt_1" 00:13:14.715 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:14.715 "nvmf_tgt_2" 00:13:14.715 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:14.715 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:14.973 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:14.973 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:14.973 true 00:13:14.973 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:15.230 true 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.230 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.230 rmmod nvme_tcp 00:13:15.230 rmmod nvme_fabrics 00:13:15.230 rmmod nvme_keyring 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1071129 ']' 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1071129 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1071129 ']' 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1071129 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1071129 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1071129' 00:13:15.488 killing process with pid 1071129 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1071129 00:13:15.488 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1071129 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.748 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.650 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:17.650 00:13:17.650 real 0m6.080s 00:13:17.650 user 0m7.079s 00:13:17.650 sys 0m2.078s 00:13:17.650 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.650 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:17.650 ************************************ 00:13:17.650 END TEST nvmf_multitarget 00:13:17.650 ************************************ 00:13:17.650 19:11:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:17.650 19:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:17.650 19:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.650 19:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:17.650 ************************************ 00:13:17.650 START TEST nvmf_rpc 00:13:17.650 ************************************ 00:13:17.650 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:17.909 * Looking for test storage... 
00:13:17.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.909 19:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:17.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.909 --rc genhtml_branch_coverage=1 00:13:17.909 --rc genhtml_function_coverage=1 00:13:17.909 --rc genhtml_legend=1 00:13:17.909 --rc geninfo_all_blocks=1 00:13:17.909 --rc geninfo_unexecuted_blocks=1 
00:13:17.909 00:13:17.909 ' 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:17.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.909 --rc genhtml_branch_coverage=1 00:13:17.909 --rc genhtml_function_coverage=1 00:13:17.909 --rc genhtml_legend=1 00:13:17.909 --rc geninfo_all_blocks=1 00:13:17.909 --rc geninfo_unexecuted_blocks=1 00:13:17.909 00:13:17.909 ' 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:17.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.909 --rc genhtml_branch_coverage=1 00:13:17.909 --rc genhtml_function_coverage=1 00:13:17.909 --rc genhtml_legend=1 00:13:17.909 --rc geninfo_all_blocks=1 00:13:17.909 --rc geninfo_unexecuted_blocks=1 00:13:17.909 00:13:17.909 ' 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:17.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.909 --rc genhtml_branch_coverage=1 00:13:17.909 --rc genhtml_function_coverage=1 00:13:17.909 --rc genhtml_legend=1 00:13:17.909 --rc geninfo_all_blocks=1 00:13:17.909 --rc geninfo_unexecuted_blocks=1 00:13:17.909 00:13:17.909 ' 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.909 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.910 19:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:17.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:17.910 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:17.910 19:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.442 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.443 
19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:13:20.443 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:20.443 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:20.443 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:20.443 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.443 19:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:20.443 
19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:20.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:13:20.443 00:13:20.443 --- 10.0.0.2 ping statistics --- 00:13:20.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.443 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:13:20.443 00:13:20.443 --- 10.0.0.1 ping statistics --- 00:13:20.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.443 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1073253 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1073253 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1073253 
']' 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.443 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.444 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.444 [2024-12-06 19:11:30.746917] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:13:20.444 [2024-12-06 19:11:30.747013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.444 [2024-12-06 19:11:30.821330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.444 [2024-12-06 19:11:30.881440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.444 [2024-12-06 19:11:30.881499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.444 [2024-12-06 19:11:30.881527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.444 [2024-12-06 19:11:30.881538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:20.444 [2024-12-06 19:11:30.881547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.444 [2024-12-06 19:11:30.883178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.444 [2024-12-06 19:11:30.883288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.444 [2024-12-06 19:11:30.883369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.444 [2024-12-06 19:11:30.883372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.444 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.444 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:20.444 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.444 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:20.444 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:20.701 "tick_rate": 2700000000, 00:13:20.701 "poll_groups": [ 00:13:20.701 { 00:13:20.701 "name": "nvmf_tgt_poll_group_000", 00:13:20.701 "admin_qpairs": 0, 00:13:20.701 "io_qpairs": 0, 00:13:20.701 
"current_admin_qpairs": 0, 00:13:20.701 "current_io_qpairs": 0, 00:13:20.701 "pending_bdev_io": 0, 00:13:20.701 "completed_nvme_io": 0, 00:13:20.701 "transports": [] 00:13:20.701 }, 00:13:20.701 { 00:13:20.701 "name": "nvmf_tgt_poll_group_001", 00:13:20.701 "admin_qpairs": 0, 00:13:20.701 "io_qpairs": 0, 00:13:20.701 "current_admin_qpairs": 0, 00:13:20.701 "current_io_qpairs": 0, 00:13:20.701 "pending_bdev_io": 0, 00:13:20.701 "completed_nvme_io": 0, 00:13:20.701 "transports": [] 00:13:20.701 }, 00:13:20.701 { 00:13:20.701 "name": "nvmf_tgt_poll_group_002", 00:13:20.701 "admin_qpairs": 0, 00:13:20.701 "io_qpairs": 0, 00:13:20.701 "current_admin_qpairs": 0, 00:13:20.701 "current_io_qpairs": 0, 00:13:20.701 "pending_bdev_io": 0, 00:13:20.701 "completed_nvme_io": 0, 00:13:20.701 "transports": [] 00:13:20.701 }, 00:13:20.701 { 00:13:20.701 "name": "nvmf_tgt_poll_group_003", 00:13:20.701 "admin_qpairs": 0, 00:13:20.701 "io_qpairs": 0, 00:13:20.701 "current_admin_qpairs": 0, 00:13:20.701 "current_io_qpairs": 0, 00:13:20.701 "pending_bdev_io": 0, 00:13:20.701 "completed_nvme_io": 0, 00:13:20.701 "transports": [] 00:13:20.701 } 00:13:20.701 ] 00:13:20.701 }' 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.701 [2024-12-06 19:11:31.109659] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.701 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:20.701 "tick_rate": 2700000000, 00:13:20.701 "poll_groups": [ 00:13:20.701 { 00:13:20.701 "name": "nvmf_tgt_poll_group_000", 00:13:20.701 "admin_qpairs": 0, 00:13:20.701 "io_qpairs": 0, 00:13:20.701 "current_admin_qpairs": 0, 00:13:20.701 "current_io_qpairs": 0, 00:13:20.701 "pending_bdev_io": 0, 00:13:20.701 "completed_nvme_io": 0, 00:13:20.701 "transports": [ 00:13:20.701 { 00:13:20.701 "trtype": "TCP" 00:13:20.701 } 00:13:20.701 ] 00:13:20.701 }, 00:13:20.701 { 00:13:20.701 "name": "nvmf_tgt_poll_group_001", 00:13:20.701 "admin_qpairs": 0, 00:13:20.701 "io_qpairs": 0, 00:13:20.701 "current_admin_qpairs": 0, 00:13:20.701 "current_io_qpairs": 0, 00:13:20.701 "pending_bdev_io": 0, 00:13:20.701 "completed_nvme_io": 0, 00:13:20.701 "transports": [ 00:13:20.701 { 00:13:20.701 "trtype": "TCP" 00:13:20.701 } 00:13:20.701 ] 00:13:20.701 }, 00:13:20.701 { 00:13:20.701 "name": "nvmf_tgt_poll_group_002", 00:13:20.701 "admin_qpairs": 0, 00:13:20.701 "io_qpairs": 0, 00:13:20.701 
"current_admin_qpairs": 0, 00:13:20.701 "current_io_qpairs": 0, 00:13:20.701 "pending_bdev_io": 0, 00:13:20.701 "completed_nvme_io": 0, 00:13:20.701 "transports": [ 00:13:20.701 { 00:13:20.701 "trtype": "TCP" 00:13:20.701 } 00:13:20.701 ] 00:13:20.701 }, 00:13:20.701 { 00:13:20.701 "name": "nvmf_tgt_poll_group_003", 00:13:20.701 "admin_qpairs": 0, 00:13:20.701 "io_qpairs": 0, 00:13:20.701 "current_admin_qpairs": 0, 00:13:20.702 "current_io_qpairs": 0, 00:13:20.702 "pending_bdev_io": 0, 00:13:20.702 "completed_nvme_io": 0, 00:13:20.702 "transports": [ 00:13:20.702 { 00:13:20.702 "trtype": "TCP" 00:13:20.702 } 00:13:20.702 ] 00:13:20.702 } 00:13:20.702 ] 00:13:20.702 }' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.702 Malloc1 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.702 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.958 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.958 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.958 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.958 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.958 [2024-12-06 19:11:31.284463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.958 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.958 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:20.958 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.959 
19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:20.959 [2024-12-06 19:11:31.307050] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:20.959 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:20.959 could not add new controller: failed to write to nvme-fabrics device 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.959 19:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.959 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.523 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.523 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:21.523 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.523 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:21.523 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.047 19:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.047 [2024-12-06 19:11:34.146428] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:24.047 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:24.047 could not add new controller: failed to write to nvme-fabrics device 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.047 19:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.047 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.305 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.305 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:24.305 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.305 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:24.306 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.830 [2024-12-06 19:11:36.902403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.830 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.089 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.089 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:27.089 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.089 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:27.089 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:28.990 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:28.990 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:28.990 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.990 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:28.990 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.990 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:28.990 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.249 19:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.249 [2024-12-06 19:11:39.685642] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.249 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.816 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.816 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:29.816 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.816 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:29.816 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
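The loop iterations logged above each provision and tear down the target the same way: create a subsystem with a known serial, add a TCP listener, attach the `Malloc1` bdev as namespace 5, open it to any host, exercise it from the initiator, then remove the namespace and delete the subsystem. A minimal dry-run sketch of one iteration follows; `RPC` is set to an `echo` prefix here so the sketch runs without a live SPDK target (point it at SPDK's `scripts/rpc.py` to provision for real), and the initiator-side `nvme connect`/`nvme disconnect` steps are left as comments because they need nvme-cli and a reachable target.

```shell
#!/usr/bin/env bash
# Hedged sketch of one target/rpc.sh loop iteration seen in this log.
# RPC is a dry-run echo: commands are printed, not executed.
RPC="echo rpc.py"

NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME        # subsystem with known serial
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5                   # Malloc1 bdev as nsid 5
$RPC nvmf_subsystem_allow_any_host "$NQN"

# Initiator side (requires nvme-cli and a running target):
#   nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
#   ... waitforserial / I/O ...
#   nvme disconnect -n "$NQN"

# Teardown mirrors setup:
$RPC nvmf_subsystem_remove_ns "$NQN" 5
$RPC nvmf_delete_subsystem "$NQN"
```

With the dry-run prefix, each line prints the `rpc.py` invocation it would make, which matches the `rpc_cmd` calls recorded in the log.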
00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.344 [2024-12-06 19:11:42.509597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.344 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.345 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.909 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:32.909 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:32.909 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:32.909 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:32.909 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.807 [2024-12-06 19:11:45.331515] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.807 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.740 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.740 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:35.740 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.740 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:35.740 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.637 [2024-12-06 19:11:48.164530] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.637 19:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.637 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.202 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.202 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:38.202 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.202 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:38.202 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.729 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 [2024-12-06 19:11:50.953794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 [2024-12-06 19:11:51.001843] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.730 
19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 [2024-12-06 19:11:51.050019] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.730 
19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 [2024-12-06 19:11:51.098165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.730 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.731 [2024-12-06 
19:11:51.146341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.731 
19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:40.731 "tick_rate": 2700000000, 00:13:40.731 "poll_groups": [ 00:13:40.731 { 00:13:40.731 "name": "nvmf_tgt_poll_group_000", 00:13:40.731 "admin_qpairs": 2, 00:13:40.731 "io_qpairs": 84, 00:13:40.731 "current_admin_qpairs": 0, 00:13:40.731 "current_io_qpairs": 0, 00:13:40.731 "pending_bdev_io": 0, 00:13:40.731 "completed_nvme_io": 134, 00:13:40.731 "transports": [ 00:13:40.731 { 00:13:40.731 "trtype": "TCP" 00:13:40.731 } 00:13:40.731 ] 00:13:40.731 }, 00:13:40.731 { 00:13:40.731 "name": "nvmf_tgt_poll_group_001", 00:13:40.731 "admin_qpairs": 2, 00:13:40.731 "io_qpairs": 84, 00:13:40.731 "current_admin_qpairs": 0, 00:13:40.731 "current_io_qpairs": 0, 00:13:40.731 "pending_bdev_io": 0, 00:13:40.731 "completed_nvme_io": 134, 00:13:40.731 "transports": [ 00:13:40.731 { 00:13:40.731 "trtype": "TCP" 00:13:40.731 } 00:13:40.731 ] 00:13:40.731 }, 00:13:40.731 { 00:13:40.731 "name": "nvmf_tgt_poll_group_002", 00:13:40.731 "admin_qpairs": 1, 00:13:40.731 "io_qpairs": 84, 00:13:40.731 "current_admin_qpairs": 0, 00:13:40.731 "current_io_qpairs": 0, 00:13:40.731 "pending_bdev_io": 0, 00:13:40.731 "completed_nvme_io": 235, 00:13:40.731 "transports": [ 00:13:40.731 { 00:13:40.731 "trtype": "TCP" 00:13:40.731 } 00:13:40.731 ] 00:13:40.731 }, 00:13:40.731 { 00:13:40.731 "name": "nvmf_tgt_poll_group_003", 00:13:40.731 "admin_qpairs": 2, 00:13:40.731 "io_qpairs": 84, 
00:13:40.731 "current_admin_qpairs": 0, 00:13:40.731 "current_io_qpairs": 0, 00:13:40.731 "pending_bdev_io": 0, 00:13:40.731 "completed_nvme_io": 183, 00:13:40.731 "transports": [ 00:13:40.731 { 00:13:40.731 "trtype": "TCP" 00:13:40.731 } 00:13:40.731 ] 00:13:40.731 } 00:13:40.731 ] 00:13:40.731 }' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.731 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:40.731 rmmod nvme_tcp 00:13:40.731 rmmod nvme_fabrics 00:13:40.989 rmmod nvme_keyring 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1073253 ']' 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1073253 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1073253 ']' 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1073253 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1073253 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1073253' 00:13:40.989 killing process with pid 1073253 00:13:40.989 19:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1073253 00:13:40.989 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1073253 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.249 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.158 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:43.158 00:13:43.158 real 0m25.467s 00:13:43.158 user 1m22.146s 00:13:43.158 sys 0m4.339s 00:13:43.158 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.158 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.158 ************************************ 00:13:43.158 END TEST 
nvmf_rpc 00:13:43.158 ************************************ 00:13:43.158 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:43.158 19:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:43.158 19:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.158 19:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:43.158 ************************************ 00:13:43.158 START TEST nvmf_invalid 00:13:43.158 ************************************ 00:13:43.158 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:43.418 * Looking for test storage... 00:13:43.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:43.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.418 --rc genhtml_branch_coverage=1 00:13:43.418 --rc genhtml_function_coverage=1 00:13:43.418 --rc genhtml_legend=1 00:13:43.418 --rc geninfo_all_blocks=1 00:13:43.418 --rc geninfo_unexecuted_blocks=1 00:13:43.418 00:13:43.418 ' 
00:13:43.418 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:43.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.418 --rc genhtml_branch_coverage=1 00:13:43.418 --rc genhtml_function_coverage=1 00:13:43.419 --rc genhtml_legend=1 00:13:43.419 --rc geninfo_all_blocks=1 00:13:43.419 --rc geninfo_unexecuted_blocks=1 00:13:43.419 00:13:43.419 ' 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:43.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.419 --rc genhtml_branch_coverage=1 00:13:43.419 --rc genhtml_function_coverage=1 00:13:43.419 --rc genhtml_legend=1 00:13:43.419 --rc geninfo_all_blocks=1 00:13:43.419 --rc geninfo_unexecuted_blocks=1 00:13:43.419 00:13:43.419 ' 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:43.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.419 --rc genhtml_branch_coverage=1 00:13:43.419 --rc genhtml_function_coverage=1 00:13:43.419 --rc genhtml_legend=1 00:13:43.419 --rc geninfo_all_blocks=1 00:13:43.419 --rc geninfo_unexecuted_blocks=1 00:13:43.419 00:13:43.419 ' 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.419 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.419 
19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.419 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:43.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:43.419 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:43.419 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:45.952 19:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.952 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.953 19:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:45.953 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:45.953 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:45.953 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:45.953 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.953 19:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:45.953 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.953 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.953 19:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.953 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:45.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:13:45.954 00:13:45.954 --- 10.0.0.2 ping statistics --- 00:13:45.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.954 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:13:45.954 00:13:45.954 --- 10.0.0.1 ping statistics --- 00:13:45.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.954 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:45.954 19:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1077755 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1077755 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1077755 ']' 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:45.954 [2024-12-06 19:11:56.132453] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:13:45.954 [2024-12-06 19:11:56.132547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.954 [2024-12-06 19:11:56.206121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.954 [2024-12-06 19:11:56.267020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.954 [2024-12-06 19:11:56.267068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.954 [2024-12-06 19:11:56.267093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.954 [2024-12-06 19:11:56.267104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.954 [2024-12-06 19:11:56.267113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:45.954 [2024-12-06 19:11:56.268699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.954 [2024-12-06 19:11:56.268755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.954 [2024-12-06 19:11:56.268752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.954 [2024-12-06 19:11:56.268727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:45.954 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26102 00:13:46.212 [2024-12-06 19:11:56.658266] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:46.212 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:46.212 { 00:13:46.212 "nqn": "nqn.2016-06.io.spdk:cnode26102", 00:13:46.212 "tgt_name": "foobar", 00:13:46.212 "method": "nvmf_create_subsystem", 00:13:46.212 "req_id": 1 00:13:46.212 } 00:13:46.212 Got JSON-RPC error 
response 00:13:46.212 response: 00:13:46.212 { 00:13:46.212 "code": -32603, 00:13:46.212 "message": "Unable to find target foobar" 00:13:46.212 }' 00:13:46.212 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:46.212 { 00:13:46.212 "nqn": "nqn.2016-06.io.spdk:cnode26102", 00:13:46.212 "tgt_name": "foobar", 00:13:46.212 "method": "nvmf_create_subsystem", 00:13:46.212 "req_id": 1 00:13:46.212 } 00:13:46.212 Got JSON-RPC error response 00:13:46.212 response: 00:13:46.212 { 00:13:46.212 "code": -32603, 00:13:46.212 "message": "Unable to find target foobar" 00:13:46.212 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:46.212 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:46.212 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2794 00:13:46.471 [2024-12-06 19:11:56.939234] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2794: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:46.471 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:46.471 { 00:13:46.471 "nqn": "nqn.2016-06.io.spdk:cnode2794", 00:13:46.471 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:46.471 "method": "nvmf_create_subsystem", 00:13:46.471 "req_id": 1 00:13:46.471 } 00:13:46.471 Got JSON-RPC error response 00:13:46.471 response: 00:13:46.471 { 00:13:46.471 "code": -32602, 00:13:46.471 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:46.471 }' 00:13:46.471 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:46.471 { 00:13:46.471 "nqn": "nqn.2016-06.io.spdk:cnode2794", 00:13:46.471 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:46.471 "method": "nvmf_create_subsystem", 00:13:46.471 
"req_id": 1 00:13:46.471 } 00:13:46.471 Got JSON-RPC error response 00:13:46.471 response: 00:13:46.471 { 00:13:46.471 "code": -32602, 00:13:46.471 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:46.471 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:46.471 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:46.471 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9165 00:13:46.730 [2024-12-06 19:11:57.216068] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9165: invalid model number 'SPDK_Controller' 00:13:46.730 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:46.730 { 00:13:46.730 "nqn": "nqn.2016-06.io.spdk:cnode9165", 00:13:46.730 "model_number": "SPDK_Controller\u001f", 00:13:46.730 "method": "nvmf_create_subsystem", 00:13:46.730 "req_id": 1 00:13:46.730 } 00:13:46.730 Got JSON-RPC error response 00:13:46.730 response: 00:13:46.730 { 00:13:46.730 "code": -32602, 00:13:46.730 "message": "Invalid MN SPDK_Controller\u001f" 00:13:46.730 }' 00:13:46.730 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:46.730 { 00:13:46.730 "nqn": "nqn.2016-06.io.spdk:cnode9165", 00:13:46.730 "model_number": "SPDK_Controller\u001f", 00:13:46.730 "method": "nvmf_create_subsystem", 00:13:46.730 "req_id": 1 00:13:46.730 } 00:13:46.730 Got JSON-RPC error response 00:13:46.730 response: 00:13:46.730 { 00:13:46.730 "code": -32602, 00:13:46.730 "message": "Invalid MN SPDK_Controller\u001f" 00:13:46.730 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:46.730 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:46.730 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:13:46.730 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' … '126' '127')  # ASCII codes 32-127
00:13:46.730 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:13:46.730 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
[xtrace elided: the target/invalid.sh@24-25 loop repeats "printf %x N; echo -e '\xNN'; string+=<char>" once per character to build the 21-character random serial number echoed below]
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:47.040 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ E == \- ]] 00:13:47.040 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'E}stxlE7b}mX[~2@]k#E|' 00:13:47.040 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'E}stxlE7b}mX[~2@]k#E|' nqn.2016-06.io.spdk:cnode31386 00:13:47.040 [2024-12-06 19:11:57.573281] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31386: invalid serial number 'E}stxlE7b}mX[~2@]k#E|' 00:13:47.355 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:47.355 { 00:13:47.355 "nqn": "nqn.2016-06.io.spdk:cnode31386", 00:13:47.355 "serial_number": "E}stxlE7b}mX[~2@]k#E|", 00:13:47.355 "method": "nvmf_create_subsystem", 00:13:47.355 "req_id": 1 00:13:47.355 } 00:13:47.355 Got JSON-RPC error response 00:13:47.355 response: 00:13:47.355 { 00:13:47.355 "code": -32602, 00:13:47.355 "message": "Invalid SN E}stxlE7b}mX[~2@]k#E|" 00:13:47.355 }' 00:13:47.355 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:47.355 { 00:13:47.355 "nqn": "nqn.2016-06.io.spdk:cnode31386", 00:13:47.355 "serial_number": "E}stxlE7b}mX[~2@]k#E|", 00:13:47.355 "method": "nvmf_create_subsystem", 00:13:47.355 "req_id": 1 00:13:47.355 } 00:13:47.355 Got JSON-RPC error response 00:13:47.355 response: 00:13:47.355 { 00:13:47.355 "code": -32602, 00:13:47.355 "message": "Invalid SN E}stxlE7b}mX[~2@]k#E|" 00:13:47.355 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:47.355 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:47.355 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:47.355 19:11:57 
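The per-character xtrace above comes from the `gen_random_s` helper in `target/invalid.sh`. A minimal standalone bash sketch of that loop, reconstructed from the trace rather than taken from the script itself, is:

```shell
#!/usr/bin/env bash
# Reconstruction of gen_random_s from the xtrace: append $length random
# characters drawn from ASCII codes 32-127 (the traced chars array) to $string.
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))   # same code range as the traced chars array
    for ((ll = 0; ll < length; ll++)); do
        # the trace renders each code with `printf %x` fed into `echo -e '\xNN'`
        local hex
        hex=$(printf %x "${chars[RANDOM % ${#chars[@]}]}")
        string+=$(echo -e "\\x$hex")
    done
    printf '%s\n' "$string"
}
```

Calling `gen_random_s 21` or `gen_random_s 41` then reproduces the string lengths the test feeds to `rpc.py` below.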
[xtrace elided: gen_random_s runs the same target/invalid.sh@24-25 per-character loop again, this time appending 41 random characters to build the model-number string echoed below]
19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ A == \- ]] 00:13:47.358 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'A/cG#1s{hW"."m3Tk6v572(2_m&N~?X&j5#$}-iqW' 00:13:47.358 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'A/cG#1s{hW"."m3Tk6v572(2_m&N~?X&j5#$}-iqW' nqn.2016-06.io.spdk:cnode15507 00:13:47.615 [2024-12-06 19:11:58.034715] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15507: invalid model number 'A/cG#1s{hW"."m3Tk6v572(2_m&N~?X&j5#$}-iqW' 00:13:47.615 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:47.615 { 00:13:47.615 "nqn": "nqn.2016-06.io.spdk:cnode15507", 00:13:47.615 "model_number": "A/cG#1s{hW\".\"m3Tk6v572(2_m&N~?X&j5#$}-iqW", 00:13:47.615 "method": "nvmf_create_subsystem", 00:13:47.615 "req_id": 1 00:13:47.615 } 00:13:47.616 Got JSON-RPC error response 00:13:47.616 response: 00:13:47.616 { 00:13:47.616 "code": -32602, 00:13:47.616 "message": "Invalid MN A/cG#1s{hW\".\"m3Tk6v572(2_m&N~?X&j5#$}-iqW" 00:13:47.616 }' 00:13:47.616 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:47.616 { 00:13:47.616 "nqn": "nqn.2016-06.io.spdk:cnode15507", 00:13:47.616 "model_number": "A/cG#1s{hW\".\"m3Tk6v572(2_m&N~?X&j5#$}-iqW", 00:13:47.616 "method": "nvmf_create_subsystem", 00:13:47.616 "req_id": 1 00:13:47.616 } 00:13:47.616 Got JSON-RPC error response 00:13:47.616 response: 00:13:47.616 { 00:13:47.616 "code": -32602, 00:13:47.616 "message": "Invalid MN A/cG#1s{hW\".\"m3Tk6v572(2_m&N~?X&j5#$}-iqW" 00:13:47.616 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:47.616 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 
00:13:47.873 [2024-12-06 19:11:58.307676] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.873 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:48.130 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:48.130 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:48.130 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:48.130 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:48.130 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:48.388 [2024-12-06 19:11:58.865530] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:48.388 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:48.388 { 00:13:48.388 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:48.388 "listen_address": { 00:13:48.388 "trtype": "tcp", 00:13:48.388 "traddr": "", 00:13:48.388 "trsvcid": "4421" 00:13:48.388 }, 00:13:48.388 "method": "nvmf_subsystem_remove_listener", 00:13:48.388 "req_id": 1 00:13:48.388 } 00:13:48.388 Got JSON-RPC error response 00:13:48.388 response: 00:13:48.388 { 00:13:48.388 "code": -32602, 00:13:48.388 "message": "Invalid parameters" 00:13:48.388 }' 00:13:48.388 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:48.388 { 00:13:48.388 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:48.388 "listen_address": { 00:13:48.388 "trtype": "tcp", 00:13:48.388 "traddr": "", 00:13:48.388 "trsvcid": "4421" 00:13:48.388 }, 00:13:48.388 "method": 
"nvmf_subsystem_remove_listener", 00:13:48.388 "req_id": 1 00:13:48.388 } 00:13:48.388 Got JSON-RPC error response 00:13:48.388 response: 00:13:48.388 { 00:13:48.388 "code": -32602, 00:13:48.388 "message": "Invalid parameters" 00:13:48.388 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:48.388 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7823 -i 0 00:13:48.646 [2024-12-06 19:11:59.138366] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7823: invalid cntlid range [0-65519] 00:13:48.646 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:48.646 { 00:13:48.646 "nqn": "nqn.2016-06.io.spdk:cnode7823", 00:13:48.646 "min_cntlid": 0, 00:13:48.646 "method": "nvmf_create_subsystem", 00:13:48.646 "req_id": 1 00:13:48.646 } 00:13:48.646 Got JSON-RPC error response 00:13:48.646 response: 00:13:48.646 { 00:13:48.646 "code": -32602, 00:13:48.646 "message": "Invalid cntlid range [0-65519]" 00:13:48.646 }' 00:13:48.646 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:48.646 { 00:13:48.646 "nqn": "nqn.2016-06.io.spdk:cnode7823", 00:13:48.646 "min_cntlid": 0, 00:13:48.646 "method": "nvmf_create_subsystem", 00:13:48.646 "req_id": 1 00:13:48.646 } 00:13:48.646 Got JSON-RPC error response 00:13:48.646 response: 00:13:48.646 { 00:13:48.646 "code": -32602, 00:13:48.646 "message": "Invalid cntlid range [0-65519]" 00:13:48.646 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:48.646 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21676 -i 65520 00:13:48.904 [2024-12-06 19:11:59.415270] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode21676: invalid cntlid range [65520-65519] 00:13:48.904 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:48.904 { 00:13:48.904 "nqn": "nqn.2016-06.io.spdk:cnode21676", 00:13:48.904 "min_cntlid": 65520, 00:13:48.904 "method": "nvmf_create_subsystem", 00:13:48.904 "req_id": 1 00:13:48.904 } 00:13:48.904 Got JSON-RPC error response 00:13:48.904 response: 00:13:48.904 { 00:13:48.904 "code": -32602, 00:13:48.904 "message": "Invalid cntlid range [65520-65519]" 00:13:48.904 }' 00:13:48.904 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:48.904 { 00:13:48.904 "nqn": "nqn.2016-06.io.spdk:cnode21676", 00:13:48.904 "min_cntlid": 65520, 00:13:48.904 "method": "nvmf_create_subsystem", 00:13:48.904 "req_id": 1 00:13:48.904 } 00:13:48.904 Got JSON-RPC error response 00:13:48.904 response: 00:13:48.904 { 00:13:48.904 "code": -32602, 00:13:48.904 "message": "Invalid cntlid range [65520-65519]" 00:13:48.904 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:48.904 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27584 -I 0 00:13:49.163 [2024-12-06 19:11:59.700180] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27584: invalid cntlid range [1-0] 00:13:49.163 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:49.163 { 00:13:49.163 "nqn": "nqn.2016-06.io.spdk:cnode27584", 00:13:49.163 "max_cntlid": 0, 00:13:49.163 "method": "nvmf_create_subsystem", 00:13:49.163 "req_id": 1 00:13:49.163 } 00:13:49.163 Got JSON-RPC error response 00:13:49.163 response: 00:13:49.163 { 00:13:49.163 "code": -32602, 00:13:49.163 "message": "Invalid cntlid range [1-0]" 00:13:49.163 }' 00:13:49.163 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@78 -- # [[ request: 00:13:49.163 { 00:13:49.163 "nqn": "nqn.2016-06.io.spdk:cnode27584", 00:13:49.163 "max_cntlid": 0, 00:13:49.163 "method": "nvmf_create_subsystem", 00:13:49.163 "req_id": 1 00:13:49.163 } 00:13:49.163 Got JSON-RPC error response 00:13:49.163 response: 00:13:49.163 { 00:13:49.163 "code": -32602, 00:13:49.163 "message": "Invalid cntlid range [1-0]" 00:13:49.163 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:49.163 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5947 -I 65520 00:13:49.421 [2024-12-06 19:11:59.965079] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5947: invalid cntlid range [1-65520] 00:13:49.421 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:49.421 { 00:13:49.421 "nqn": "nqn.2016-06.io.spdk:cnode5947", 00:13:49.421 "max_cntlid": 65520, 00:13:49.421 "method": "nvmf_create_subsystem", 00:13:49.421 "req_id": 1 00:13:49.421 } 00:13:49.421 Got JSON-RPC error response 00:13:49.421 response: 00:13:49.421 { 00:13:49.421 "code": -32602, 00:13:49.421 "message": "Invalid cntlid range [1-65520]" 00:13:49.421 }' 00:13:49.421 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:49.421 { 00:13:49.421 "nqn": "nqn.2016-06.io.spdk:cnode5947", 00:13:49.421 "max_cntlid": 65520, 00:13:49.421 "method": "nvmf_create_subsystem", 00:13:49.421 "req_id": 1 00:13:49.421 } 00:13:49.421 Got JSON-RPC error response 00:13:49.421 response: 00:13:49.421 { 00:13:49.421 "code": -32602, 00:13:49.421 "message": "Invalid cntlid range [1-65520]" 00:13:49.421 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:49.421 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12144 -i 6 -I 5 00:13:49.987 [2024-12-06 19:12:00.274183] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12144: invalid cntlid range [6-5] 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:49.987 { 00:13:49.987 "nqn": "nqn.2016-06.io.spdk:cnode12144", 00:13:49.987 "min_cntlid": 6, 00:13:49.987 "max_cntlid": 5, 00:13:49.987 "method": "nvmf_create_subsystem", 00:13:49.987 "req_id": 1 00:13:49.987 } 00:13:49.987 Got JSON-RPC error response 00:13:49.987 response: 00:13:49.987 { 00:13:49.987 "code": -32602, 00:13:49.987 "message": "Invalid cntlid range [6-5]" 00:13:49.987 }' 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:49.987 { 00:13:49.987 "nqn": "nqn.2016-06.io.spdk:cnode12144", 00:13:49.987 "min_cntlid": 6, 00:13:49.987 "max_cntlid": 5, 00:13:49.987 "method": "nvmf_create_subsystem", 00:13:49.987 "req_id": 1 00:13:49.987 } 00:13:49.987 Got JSON-RPC error response 00:13:49.987 response: 00:13:49.987 { 00:13:49.987 "code": -32602, 00:13:49.987 "message": "Invalid cntlid range [6-5]" 00:13:49.987 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:49.987 { 00:13:49.987 "name": "foobar", 00:13:49.987 "method": "nvmf_delete_target", 00:13:49.987 "req_id": 1 00:13:49.987 } 00:13:49.987 Got JSON-RPC error response 00:13:49.987 response: 00:13:49.987 { 00:13:49.987 "code": -32602, 00:13:49.987 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:49.987 }' 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:49.987 { 00:13:49.987 "name": "foobar", 00:13:49.987 "method": "nvmf_delete_target", 00:13:49.987 "req_id": 1 00:13:49.987 } 00:13:49.987 Got JSON-RPC error response 00:13:49.987 response: 00:13:49.987 { 00:13:49.987 "code": -32602, 00:13:49.987 "message": "The specified target doesn't exist, cannot delete it." 00:13:49.987 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.987 rmmod nvme_tcp 00:13:49.987 rmmod nvme_fabrics 00:13:49.987 rmmod nvme_keyring 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1077755 ']' 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@518 -- # killprocess 1077755 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1077755 ']' 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1077755 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1077755 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1077755' 00:13:49.987 killing process with pid 1077755 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1077755 00:13:49.987 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1077755 00:13:50.246 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.246 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.246 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.246 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:50.246 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:50.246 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.246 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@791 -- # iptables-restore 00:13:50.247 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.247 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.247 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.247 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.247 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.779 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.779 00:13:52.779 real 0m9.065s 00:13:52.779 user 0m21.865s 00:13:52.779 sys 0m2.530s 00:13:52.779 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.779 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:52.779 ************************************ 00:13:52.779 END TEST nvmf_invalid 00:13:52.779 ************************************ 00:13:52.779 19:12:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:52.779 19:12:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.779 19:12:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.779 19:12:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.779 ************************************ 00:13:52.779 START TEST nvmf_connect_stress 00:13:52.779 ************************************ 00:13:52.779 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:52.779 * Looking for test storage... 00:13:52.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.779 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:52.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.780 --rc genhtml_branch_coverage=1 00:13:52.780 --rc genhtml_function_coverage=1 00:13:52.780 --rc genhtml_legend=1 00:13:52.780 --rc geninfo_all_blocks=1 00:13:52.780 --rc geninfo_unexecuted_blocks=1 00:13:52.780 00:13:52.780 ' 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:52.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.780 --rc genhtml_branch_coverage=1 00:13:52.780 --rc genhtml_function_coverage=1 00:13:52.780 --rc genhtml_legend=1 00:13:52.780 --rc geninfo_all_blocks=1 00:13:52.780 --rc geninfo_unexecuted_blocks=1 00:13:52.780 00:13:52.780 ' 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:52.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.780 --rc genhtml_branch_coverage=1 00:13:52.780 --rc genhtml_function_coverage=1 00:13:52.780 --rc genhtml_legend=1 00:13:52.780 --rc geninfo_all_blocks=1 00:13:52.780 --rc geninfo_unexecuted_blocks=1 00:13:52.780 00:13:52.780 ' 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:52.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.780 --rc genhtml_branch_coverage=1 00:13:52.780 --rc genhtml_function_coverage=1 00:13:52.780 --rc genhtml_legend=1 00:13:52.780 --rc geninfo_all_blocks=1 00:13:52.780 --rc geninfo_unexecuted_blocks=1 00:13:52.780 00:13:52.780 ' 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.780 19:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.780 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:52.780 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.781 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.687 19:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:54.687 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:54.687 19:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:54.687 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.687 19:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:54.687 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:54.687 Found net devices under 0000:0a:00.1: cvl_0_1 
00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:54.687 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:54.688 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.688 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:54.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:54.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:13:54.948 00:13:54.948 --- 10.0.0.2 ping statistics --- 00:13:54.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.948 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:13:54.948 00:13:54.948 --- 10.0.0.1 ping statistics --- 00:13:54.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.948 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:54.948 19:12:05 
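The trace above shows `nvmf_tcp_init` building the test topology: one of the two `ice` ports is moved into a fresh network namespace, each side gets a 10.0.0.0/24 address, port 4420 is opened in the firewall, and a ping in each direction confirms reachability. A dry-run sketch of those steps, with the interface and namespace names taken from the log (`run()` prints each step instead of executing it, since the real commands need root and this specific hardware):

```shell
# Dry-run sketch of the namespace topology nvmf_tcp_init builds.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) come from the log above.
run() { echo "+ $*"; }   # print the command instead of running it

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                  # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, default namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # cross-namespace reachability check
```

Because the two ports sit on the same physical link, traffic between the namespaced target and the default-namespace initiator actually traverses the wire, which is the point of the phy (physical) autotest variant.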
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1080636 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1080636 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1080636 ']' 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.948 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.948 [2024-12-06 19:12:05.503895] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:13:54.948 [2024-12-06 19:12:05.503973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.206 [2024-12-06 19:12:05.575694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:55.206 [2024-12-06 19:12:05.635690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.206 [2024-12-06 19:12:05.635753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.206 [2024-12-06 19:12:05.635775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.206 [2024-12-06 19:12:05.635787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.206 [2024-12-06 19:12:05.635796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:55.206 [2024-12-06 19:12:05.637402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.206 [2024-12-06 19:12:05.637464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.206 [2024-12-06 19:12:05.637467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.206 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.206 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:55.206 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:55.206 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:55.206 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.463 [2024-12-06 19:12:05.795120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.463 [2024-12-06 19:12:05.812412] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.463 NULL1 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1080658 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1080658 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.463 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.464 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.721 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.721 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1080658 00:13:55.721 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.721 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
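The `rpc_cmd` calls traced above configure the target inside the namespace: create the TCP transport, create subsystem `nqn.2016-06.io.spdk:cnode1`, add a listener on 10.0.0.2:4420, and back it with a 1000 MiB null bdev. A dry-run sketch of the equivalent plain RPC sequence (arguments are copied from the log; the `scripts/rpc.py` path is an assumption about a typical SPDK checkout, and the leading `echo` prints each call rather than issuing it):

```shell
# Dry-run sketch of the RPC sequence traced in the log above.
RPC="echo + scripts/rpc.py"   # drop the "echo +" to issue the calls for real

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
```

The repeated `for i in $(seq 1 20)` / `cat` iterations that follow in the trace assemble the perf/stress RPC batch file (`rpc.txt`) that is replayed against the target while `connect_stress` hammers the listener.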
common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.721 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.977 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.977 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1080658 00:13:55.977 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.977 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.977 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.541 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.541 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1080658 00:13:56.541 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.541 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.541 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.799 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.799 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1080658 00:13:56.799 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.799 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.799 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.056 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.056 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1080658
00:13:57.056 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:57.056 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.056 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:05.553 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1080658
00:14:05.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1080658) - No such process
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1080658
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:05.820 rmmod nvme_tcp
00:14:05.820 rmmod nvme_fabrics
00:14:05.820 rmmod nvme_keyring
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1080636 ']'
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1080636
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1080636 ']'
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1080636
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1080636
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1080636'
00:14:05.820 killing process with pid 1080636
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1080636
00:14:05.820 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1080636
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:06.078 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:06.079 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:06.079 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:07.981 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:07.981
00:14:07.981 real 0m15.713s
00:14:07.981 user 0m38.473s
00:14:07.981 sys 0m6.189s
00:14:07.981 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:07.981 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:07.981 ************************************
00:14:07.981 END TEST nvmf_connect_stress
00:14:07.981 ************************************
00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.240 ************************************ 00:14:08.240 START TEST nvmf_fused_ordering 00:14:08.240 ************************************ 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:08.240 * Looking for test storage... 00:14:08.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:08.240 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.241 19:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:08.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.241 --rc genhtml_branch_coverage=1 00:14:08.241 --rc genhtml_function_coverage=1 00:14:08.241 --rc genhtml_legend=1 00:14:08.241 --rc geninfo_all_blocks=1 00:14:08.241 --rc geninfo_unexecuted_blocks=1 00:14:08.241 00:14:08.241 ' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:08.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.241 --rc genhtml_branch_coverage=1 00:14:08.241 --rc genhtml_function_coverage=1 00:14:08.241 --rc genhtml_legend=1 00:14:08.241 --rc geninfo_all_blocks=1 00:14:08.241 --rc geninfo_unexecuted_blocks=1 00:14:08.241 00:14:08.241 ' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:08.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.241 --rc genhtml_branch_coverage=1 00:14:08.241 --rc genhtml_function_coverage=1 00:14:08.241 --rc genhtml_legend=1 00:14:08.241 --rc geninfo_all_blocks=1 00:14:08.241 --rc geninfo_unexecuted_blocks=1 00:14:08.241 00:14:08.241 ' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:08.241 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:08.241 --rc genhtml_branch_coverage=1 00:14:08.241 --rc genhtml_function_coverage=1 00:14:08.241 --rc genhtml_legend=1 00:14:08.241 --rc geninfo_all_blocks=1 00:14:08.241 --rc geninfo_unexecuted_blocks=1 00:14:08.241 00:14:08.241 ' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.241 19:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.241 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.777 19:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:10.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.777 19:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.777 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:10.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.778 19:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:10.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:10.778 Found net devices under 0000:0a:00.1: cvl_0_1 
00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:10.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:10.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:14:10.778 00:14:10.778 --- 10.0.0.2 ping statistics --- 00:14:10.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.778 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:14:10.778 00:14:10.778 --- 10.0.0.1 ping statistics --- 00:14:10.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.778 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:10.778 19:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1084333 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1084333 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1084333 ']' 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.778 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.778 [2024-12-06 19:12:21.019515] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:14:10.778 [2024-12-06 19:12:21.019604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.778 [2024-12-06 19:12:21.100882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.778 [2024-12-06 19:12:21.154907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.778 [2024-12-06 19:12:21.154984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.778 [2024-12-06 19:12:21.155013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.778 [2024-12-06 19:12:21.155024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.778 [2024-12-06 19:12:21.155033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:10.778 [2024-12-06 19:12:21.155606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.778 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.778 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:10.778 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.778 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.778 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.779 [2024-12-06 19:12:21.294780] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.779 [2024-12-06 19:12:21.311017] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.779 NULL1 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.779 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:11.037 [2024-12-06 19:12:21.355320] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:14:11.037 [2024-12-06 19:12:21.355354] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084459 ] 00:14:11.295 Attached to nqn.2016-06.io.spdk:cnode1 00:14:11.295 Namespace ID: 1 size: 1GB 00:14:11.295 fused_ordering(0) 00:14:11.295 fused_ordering(1) 00:14:11.295 fused_ordering(2) 00:14:11.295 fused_ordering(3) 00:14:11.295 fused_ordering(4) 00:14:11.295 fused_ordering(5) 00:14:11.295 fused_ordering(6) 00:14:11.295 fused_ordering(7) 00:14:11.295 fused_ordering(8) 00:14:11.296 fused_ordering(9) 00:14:11.296 fused_ordering(10) 00:14:11.296 fused_ordering(11) 00:14:11.296 fused_ordering(12) 00:14:11.296 fused_ordering(13) 00:14:11.296 fused_ordering(14) 00:14:11.296 fused_ordering(15) 00:14:11.296 fused_ordering(16) 00:14:11.296 fused_ordering(17) 00:14:11.296 fused_ordering(18) 00:14:11.296 fused_ordering(19) 00:14:11.296 fused_ordering(20) 00:14:11.296 fused_ordering(21) 00:14:11.296 fused_ordering(22) 00:14:11.296 fused_ordering(23) 00:14:11.296 fused_ordering(24) 00:14:11.296 fused_ordering(25) 00:14:11.296 fused_ordering(26) 00:14:11.296 fused_ordering(27) 00:14:11.296 
fused_ordering(28) 00:14:11.296 fused_ordering(29) 00:14:11.296 fused_ordering(30) 00:14:11.296 fused_ordering(31) 00:14:11.296 fused_ordering(32) 00:14:11.296 fused_ordering(33) 00:14:11.296 fused_ordering(34) 00:14:11.296 fused_ordering(35) 00:14:11.296 fused_ordering(36) 00:14:11.296 fused_ordering(37) 00:14:11.296 fused_ordering(38) 00:14:11.296 fused_ordering(39) 00:14:11.296 fused_ordering(40) 00:14:11.296 fused_ordering(41) 00:14:11.296 fused_ordering(42) 00:14:11.296 fused_ordering(43) 00:14:11.296 fused_ordering(44) 00:14:11.296 fused_ordering(45) 00:14:11.296 fused_ordering(46) 00:14:11.296 fused_ordering(47) 00:14:11.296 fused_ordering(48) 00:14:11.296 fused_ordering(49) 00:14:11.296 fused_ordering(50) 00:14:11.296 fused_ordering(51) 00:14:11.296 fused_ordering(52) 00:14:11.296 fused_ordering(53) 00:14:11.296 fused_ordering(54) 00:14:11.296 fused_ordering(55) 00:14:11.296 fused_ordering(56) 00:14:11.296 fused_ordering(57) 00:14:11.296 fused_ordering(58) 00:14:11.296 fused_ordering(59) 00:14:11.296 fused_ordering(60) 00:14:11.296 fused_ordering(61) 00:14:11.296 fused_ordering(62) 00:14:11.296 fused_ordering(63) 00:14:11.296 fused_ordering(64) 00:14:11.296 fused_ordering(65) 00:14:11.296 fused_ordering(66) 00:14:11.296 fused_ordering(67) 00:14:11.296 fused_ordering(68) 00:14:11.296 fused_ordering(69) 00:14:11.296 fused_ordering(70) 00:14:11.296 fused_ordering(71) 00:14:11.296 fused_ordering(72) 00:14:11.296 fused_ordering(73) 00:14:11.296 fused_ordering(74) 00:14:11.296 fused_ordering(75) 00:14:11.296 fused_ordering(76) 00:14:11.296 fused_ordering(77) 00:14:11.296 fused_ordering(78) 00:14:11.296 fused_ordering(79) 00:14:11.296 fused_ordering(80) 00:14:11.296 fused_ordering(81) 00:14:11.296 fused_ordering(82) 00:14:11.296 fused_ordering(83) 00:14:11.296 fused_ordering(84) 00:14:11.296 fused_ordering(85) 00:14:11.296 fused_ordering(86) 00:14:11.296 fused_ordering(87) 00:14:11.296 fused_ordering(88) 00:14:11.296 fused_ordering(89) 00:14:11.296 
fused_ordering(90) 00:14:11.296 fused_ordering(91) 00:14:11.296 fused_ordering(92) 00:14:11.296 fused_ordering(93) 00:14:11.296 fused_ordering(94) 00:14:11.296 fused_ordering(95) 00:14:11.296 fused_ordering(96) 00:14:11.296 fused_ordering(97) 00:14:11.296 fused_ordering(98) 00:14:11.296 fused_ordering(99) 00:14:11.296 fused_ordering(100) 00:14:11.296 fused_ordering(101) 00:14:11.296 fused_ordering(102) 00:14:11.296 fused_ordering(103) 00:14:11.296 fused_ordering(104) 00:14:11.296 fused_ordering(105) 00:14:11.296 fused_ordering(106) 00:14:11.296 fused_ordering(107) 00:14:11.296 fused_ordering(108) 00:14:11.296 fused_ordering(109) 00:14:11.296 fused_ordering(110) 00:14:11.296 fused_ordering(111) 00:14:11.296 fused_ordering(112) 00:14:11.296 fused_ordering(113) 00:14:11.296 fused_ordering(114) 00:14:11.296 fused_ordering(115) 00:14:11.296 fused_ordering(116) 00:14:11.296 fused_ordering(117) 00:14:11.296 fused_ordering(118) 00:14:11.296 fused_ordering(119) 00:14:11.296 fused_ordering(120) 00:14:11.296 fused_ordering(121) 00:14:11.296 fused_ordering(122) 00:14:11.296 fused_ordering(123) 00:14:11.296 fused_ordering(124) 00:14:11.296 fused_ordering(125) 00:14:11.296 fused_ordering(126) 00:14:11.296 fused_ordering(127) 00:14:11.296 fused_ordering(128) 00:14:11.296 fused_ordering(129) 00:14:11.296 fused_ordering(130) 00:14:11.296 fused_ordering(131) 00:14:11.296 fused_ordering(132) 00:14:11.296 fused_ordering(133) 00:14:11.296 fused_ordering(134) 00:14:11.296 fused_ordering(135) 00:14:11.296 fused_ordering(136) 00:14:11.296 fused_ordering(137) 00:14:11.296 fused_ordering(138) 00:14:11.296 fused_ordering(139) 00:14:11.296 fused_ordering(140) 00:14:11.296 fused_ordering(141) 00:14:11.296 fused_ordering(142) 00:14:11.296 fused_ordering(143) 00:14:11.296 fused_ordering(144) 00:14:11.296 fused_ordering(145) 00:14:11.296 fused_ordering(146) 00:14:11.296 fused_ordering(147) 00:14:11.296 fused_ordering(148) 00:14:11.296 fused_ordering(149) 00:14:11.296 fused_ordering(150) 
00:14:11.296 fused_ordering(151) 00:14:11.296 fused_ordering(152) 00:14:11.296 fused_ordering(153) 00:14:11.296 fused_ordering(154) 00:14:11.296 fused_ordering(155) 00:14:11.296 fused_ordering(156) 00:14:11.296 fused_ordering(157) 00:14:11.296 fused_ordering(158) 00:14:11.296 fused_ordering(159) 00:14:11.296 fused_ordering(160) 00:14:11.296 fused_ordering(161) 00:14:11.296 fused_ordering(162) 00:14:11.296 fused_ordering(163) 00:14:11.296 fused_ordering(164) 00:14:11.296 fused_ordering(165) 00:14:11.296 fused_ordering(166) 00:14:11.296 fused_ordering(167) 00:14:11.296 fused_ordering(168) 00:14:11.296 fused_ordering(169) 00:14:11.296 fused_ordering(170) 00:14:11.296 fused_ordering(171) 00:14:11.296 fused_ordering(172) 00:14:11.296 fused_ordering(173) 00:14:11.296 fused_ordering(174) 00:14:11.296 fused_ordering(175) 00:14:11.296 fused_ordering(176) 00:14:11.296 fused_ordering(177) 00:14:11.296 fused_ordering(178) 00:14:11.296 fused_ordering(179) 00:14:11.296 fused_ordering(180) 00:14:11.296 fused_ordering(181) 00:14:11.296 fused_ordering(182) 00:14:11.296 fused_ordering(183) 00:14:11.296 fused_ordering(184) 00:14:11.296 fused_ordering(185) 00:14:11.296 fused_ordering(186) 00:14:11.296 fused_ordering(187) 00:14:11.296 fused_ordering(188) 00:14:11.296 fused_ordering(189) 00:14:11.296 fused_ordering(190) 00:14:11.296 fused_ordering(191) 00:14:11.296 fused_ordering(192) 00:14:11.296 fused_ordering(193) 00:14:11.296 fused_ordering(194) 00:14:11.296 fused_ordering(195) 00:14:11.296 fused_ordering(196) 00:14:11.296 fused_ordering(197) 00:14:11.296 fused_ordering(198) 00:14:11.296 fused_ordering(199) 00:14:11.296 fused_ordering(200) 00:14:11.296 fused_ordering(201) 00:14:11.296 fused_ordering(202) 00:14:11.296 fused_ordering(203) 00:14:11.296 fused_ordering(204) 00:14:11.296 fused_ordering(205) 00:14:11.554 fused_ordering(206) 00:14:11.554 fused_ordering(207) 00:14:11.554 fused_ordering(208) 00:14:11.554 fused_ordering(209) 00:14:11.554 fused_ordering(210) 00:14:11.554 
fused_ordering(211) 00:14:11.554 [... fused_ordering(212) through fused_ordering(1022) omitted: identical per-iteration log lines, timestamps 00:14:11.554 - 00:14:13.256 ...] fused_ordering(1023) 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.256 rmmod nvme_tcp 00:14:13.256 rmmod nvme_fabrics 00:14:13.256 rmmod nvme_keyring 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1084333 ']' 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1084333 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1084333 ']' 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1084333 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1084333 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1084333' 00:14:13.256 killing process with pid 1084333 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1084333 00:14:13.256 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1084333 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.515 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:16.055 00:14:16.055 real 0m7.467s 00:14:16.055 user 0m5.062s 00:14:16.055 sys 0m3.101s 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:16.055 ************************************ 00:14:16.055 END TEST nvmf_fused_ordering 00:14:16.055 ************************************ 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:16.055 19:12:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.055 ************************************ 00:14:16.055 START TEST nvmf_ns_masking 00:14:16.055 ************************************ 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:16.055 * Looking for test storage... 00:14:16.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.055 19:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:16.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.055 --rc genhtml_branch_coverage=1 00:14:16.055 --rc genhtml_function_coverage=1 00:14:16.055 --rc genhtml_legend=1 00:14:16.055 --rc geninfo_all_blocks=1 00:14:16.055 --rc geninfo_unexecuted_blocks=1 00:14:16.055 00:14:16.055 ' 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:16.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.055 --rc genhtml_branch_coverage=1 00:14:16.055 --rc genhtml_function_coverage=1 00:14:16.055 --rc genhtml_legend=1 00:14:16.055 --rc geninfo_all_blocks=1 00:14:16.055 --rc geninfo_unexecuted_blocks=1 00:14:16.055 00:14:16.055 ' 00:14:16.055 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:16.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.055 --rc genhtml_branch_coverage=1 00:14:16.056 --rc genhtml_function_coverage=1 00:14:16.056 --rc genhtml_legend=1 00:14:16.056 --rc geninfo_all_blocks=1 00:14:16.056 --rc geninfo_unexecuted_blocks=1 00:14:16.056 00:14:16.056 ' 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:16.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.056 --rc genhtml_branch_coverage=1 00:14:16.056 --rc 
genhtml_function_coverage=1 00:14:16.056 --rc genhtml_legend=1 00:14:16.056 --rc geninfo_all_blocks=1 00:14:16.056 --rc geninfo_unexecuted_blocks=1 00:14:16.056 00:14:16.056 ' 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:16.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=95eaeec9-48f2-4b22-bd2f-d2e17081bd30 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=28b2e23c-e988-4a2c-acc6-0a1b99e964fe 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=22e9da76-3cd2-4d42-acde-a9f69dc104f3 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:16.056 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:17.959 19:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:17.959 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.960 19:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:17.960 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:17.960 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:14:17.960 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:17.960 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:17.960 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:14:18.219 00:14:18.219 --- 10.0.0.2 ping statistics --- 00:14:18.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.219 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:14:18.219 00:14:18.219 --- 10.0.0.1 ping statistics --- 00:14:18.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.219 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1086676 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1086676 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1086676 ']' 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.219 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.219 [2024-12-06 19:12:28.619271] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:14:18.219 [2024-12-06 19:12:28.619355] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.219 [2024-12-06 19:12:28.687368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.219 [2024-12-06 19:12:28.741060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.219 [2024-12-06 19:12:28.741121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:18.219 [2024-12-06 19:12:28.741149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.219 [2024-12-06 19:12:28.741160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.219 [2024-12-06 19:12:28.741169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.219 [2024-12-06 19:12:28.741788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.478 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.478 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:18.478 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.478 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.478 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.478 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.478 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.736 [2024-12-06 19:12:29.136123] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.736 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:18.736 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:18.736 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:18.995 Malloc1 00:14:18.995 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:19.274 Malloc2 00:14:19.274 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:19.594 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:19.878 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.137 [2024-12-06 19:12:30.704319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.398 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:20.398 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 22e9da76-3cd2-4d42-acde-a9f69dc104f3 -a 10.0.0.2 -s 4420 -i 4 00:14:20.398 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.398 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:20.398 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.398 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:20.398 19:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:22.939 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:22.939 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:22.939 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.939 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:22.939 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.939 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:22.939 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:22.939 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.939 [ 0]:0x1 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.939 
19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=112eb894a50648cda2605537e9a99355 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 112eb894a50648cda2605537e9a99355 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.939 [ 0]:0x1 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=112eb894a50648cda2605537e9a99355 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 112eb894a50648cda2605537e9a99355 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:22.939 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.940 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.940 [ 1]:0x2 00:14:22.940 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
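The repeated `nvme list-ns` / `nvme id-ns` / `jq -r .nguid` sequence above is `ns_masking.sh`'s `ns_is_visible` helper: a namespace counts as visible when the controller reports a non-zero NGUID for it. A minimal runnable sketch of that logic, with the `nvme`/`jq` calls replaced by canned values (the helper name and the all-zero-NGUID convention come from the log; the rest is illustrative):

```shell
# Sketch of the visibility check seen in the log. In the real script the nguid
# comes from: nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid
ns_is_visible() {
    local nguid=$1
    # An all-zero NGUID means the controller cannot see the namespace
    [[ $nguid != "00000000000000000000000000000000" ]]
}

ns_is_visible 112eb894a50648cda2605537e9a99355 && echo "visible"
ns_is_visible 00000000000000000000000000000000 || echo "masked"
```

This is why the log compares each extracted nguid against a string of 32 zeros before declaring the namespace visible.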
00:14:22.940 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.198 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66d610964ac14f44a6a6c8c3c96ade0f 00:14:23.198 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66d610964ac14f44a6a6c8c3c96ade0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.198 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:23.198 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.198 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.764 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:23.764 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:23.764 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 22e9da76-3cd2-4d42-acde-a9f69dc104f3 -a 10.0.0.2 -s 4420 -i 4 00:14:24.023 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:24.023 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:24.023 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.023 19:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:24.023 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:24.023 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
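The `NOT ns_is_visible 0x1` / `valid_exec_arg` / `es=1` machinery above is `autotest_common.sh` asserting that a command fails, used here to prove a `--no-auto-visible` namespace stays hidden. A stripped-down runnable sketch of the idea (the real helper also validates the argument with `valid_exec_arg` and treats exit codes above 128 as crashes; this sketch keeps only the inversion):

```shell
# Simplified NOT: succeed only when the wrapped command fails,
# mirroring how the test expects a masked namespace to be invisible
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    else
        return 0    # command failed, which is what NOT asserts
    fi
}

is_even() { (( $1 % 2 == 0 )); }   # stand-in for ns_is_visible
NOT is_even 3 && echo "assertion held"
```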
00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.558 [ 0]:0x2 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66d610964ac14f44a6a6c8c3c96ade0f 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66d610964ac14f44a6a6c8c3c96ade0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.558 [ 0]:0x1 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=112eb894a50648cda2605537e9a99355 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 112eb894a50648cda2605537e9a99355 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.558 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:26.559 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.559 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.559 [ 1]:0x2 00:14:26.559 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.559 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.559 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66d610964ac14f44a6a6c8c3c96ade0f 00:14:26.559 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66d610964ac14f44a6a6c8c3c96ade0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.559 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.816 [ 0]:0x2 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66d610964ac14f44a6a6c8c3c96ade0f 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66d610964ac14f44a6a6c8c3c96ade0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:26.816 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.074 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:27.332 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:27.332 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 22e9da76-3cd2-4d42-acde-a9f69dc104f3 -a 10.0.0.2 -s 4420 -i 4 00:14:27.332 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:27.332 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:27.332 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.332 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:27.332 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:27.332 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.871 [ 0]:0x1 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.871 19:12:39 
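The `sleep 2` / `lsblk -l -o NAME,SERIAL` / `grep -c SPDKISFASTANDAWESOME` loop above is `waitforserial` polling until the expected number of block devices carrying the SPDK serial have appeared on the host. A runnable sketch of its success condition, with `lsblk` simulated by canned output (the serial string is taken from the log; the fake device list is illustrative):

```shell
# Stand-in for `lsblk -l -o NAME,SERIAL` on a host with two attached namespaces
fake_lsblk() {
    printf 'nvme0n1 SPDKISFASTANDAWESOME\n'
    printf 'nvme0n2 SPDKISFASTANDAWESOME\n'
}

nvme_device_counter=2
nvme_devices=$(fake_lsblk | grep -c SPDKISFASTANDAWESOME)
(( nvme_devices == nvme_device_counter )) && echo "all namespaces attached"
```

In the log, `connect 2` passes `2` through to this counter, which is why the loop returns only once both nvme0n1 and nvme0n2 exist.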
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=112eb894a50648cda2605537e9a99355 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 112eb894a50648cda2605537e9a99355 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.871 [ 1]:0x2 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.871 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66d610964ac14f44a6a6c8c3c96ade0f 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66d610964ac14f44a6a6c8c3c96ade0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:29.871 
19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.871 [ 0]:0x2 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66d610964ac14f44a6a6c8c3c96ade0f 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66d610964ac14f44a6a6c8c3c96ade0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:29.871 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.132 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.132 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.132 19:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.132 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.132 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.132 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.132 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:30.132 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:30.132 [2024-12-06 19:12:40.698446] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:30.132 request: 00:14:30.132 { 00:14:30.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.132 "nsid": 2, 00:14:30.132 "host": "nqn.2016-06.io.spdk:host1", 00:14:30.132 "method": "nvmf_ns_remove_host", 00:14:30.132 "req_id": 1 00:14:30.132 } 00:14:30.132 Got JSON-RPC error response 00:14:30.132 response: 00:14:30.132 { 00:14:30.132 "code": -32602, 00:14:30.132 "message": "Invalid parameters" 00:14:30.132 } 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:30.390 19:12:40 
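The `\0\0…\0` runs throughout the log appear to be a bash xtrace artifact: inside `[[ a != b ]]` the right-hand side is a glob pattern, so the trace output escapes each character to show it is matched literally. The underlying comparison is just a literal string test, sketched here with values from the log:

```shell
# The nguid test as written in ns_masking.sh, minus the nvme/jq plumbing;
# quoting the right-hand side forces a literal (non-glob) comparison
nguid=66d610964ac14f44a6a6c8c3c96ade0f
zeros=00000000000000000000000000000000
if [[ $nguid != "$zeros" ]]; then
    echo "namespace visible"
fi
```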
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:30.390 [ 0]:0x2 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66d610964ac14f44a6a6c8c3c96ade0f 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66d610964ac14f44a6a6c8c3c96ade0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.390 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1088307 00:14:30.391 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:30.391 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.391 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1088307 /var/tmp/host.sock 00:14:30.391 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1088307 ']' 00:14:30.391 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:30.391 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.391 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:30.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:30.391 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.391 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.391 [2024-12-06 19:12:40.894024] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
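`waitforlisten` above blocks until the freshly started `spdk_tgt` is accepting RPCs on `/var/tmp/host.sock`. A runnable sketch of that polling pattern, with the RPC socket simulated by an ordinary file so it runs without SPDK (the path, timing, and function name here are illustrative, not copied from the script):

```shell
# Poll until a path appears, up to ~5 seconds, in the spirit of waitforlisten
waitfor_path() {
    local path=$1 i=0
    while [[ ! -e $path ]] && (( i++ < 50 )); do
        sleep 0.1
    done
    [[ -e $path ]]
}

sock=$(mktemp -u)               # pretend this is /var/tmp/host.sock
( sleep 0.3; : > "$sock" ) &    # "target" comes up shortly after launch
waitfor_path "$sock" && echo "RPC socket ready"
rm -f "$sock"
```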
00:14:30.391 [2024-12-06 19:12:40.894110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088307 ] 00:14:30.391 [2024-12-06 19:12:40.958474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.650 [2024-12-06 19:12:41.016587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.908 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.908 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:30.908 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.167 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.425 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 95eaeec9-48f2-4b22-bd2f-d2e17081bd30 00:14:31.425 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.425 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 95EAEEC948F24B22BD2FD2E17081BD30 -i 00:14:31.683 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 28b2e23c-e988-4a2c-acc6-0a1b99e964fe 00:14:31.683 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.683 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 28B2E23CE9884A2CACC60A1B99E964FE -i 00:14:31.942 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:32.200 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:32.459 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:32.459 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:33.028 nvme0n1 00:14:33.028 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:33.028 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:33.287 nvme1n2 00:14:33.546 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:33.546 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:33.546 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:33.546 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:33.546 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:33.805 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:33.805 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:33.805 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:33.805 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:34.064 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 95eaeec9-48f2-4b22-bd2f-d2e17081bd30 == \9\5\e\a\e\e\c\9\-\4\8\f\2\-\4\b\2\2\-\b\d\2\f\-\d\2\e\1\7\0\8\1\b\d\3\0 ]] 00:14:34.064 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:34.064 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:34.064 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:34.323 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 28b2e23c-e988-4a2c-acc6-0a1b99e964fe == \2\8\b\2\e\2\3\c\-\e\9\8\8\-\4\a\2\c\-\a\c\c\6\-\0\a\1\b\9\9\e\9\6\4\f\e ]] 00:14:34.323 19:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.587 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 95eaeec9-48f2-4b22-bd2f-d2e17081bd30 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 95EAEEC948F24B22BD2FD2E17081BD30 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 95EAEEC948F24B22BD2FD2E17081BD30 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:34.852 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 95EAEEC948F24B22BD2FD2E17081BD30 00:14:35.109 [2024-12-06 19:12:45.508269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:35.109 [2024-12-06 19:12:45.508306] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:35.109 [2024-12-06 19:12:45.508336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.109 request: 00:14:35.109 { 00:14:35.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.109 "namespace": { 00:14:35.109 "bdev_name": "invalid", 00:14:35.109 "nsid": 1, 00:14:35.109 "nguid": "95EAEEC948F24B22BD2FD2E17081BD30", 00:14:35.109 "no_auto_visible": false, 00:14:35.109 "hide_metadata": false 00:14:35.109 }, 00:14:35.109 "method": "nvmf_subsystem_add_ns", 00:14:35.109 "req_id": 1 00:14:35.109 } 00:14:35.109 Got JSON-RPC error response 00:14:35.109 response: 00:14:35.109 { 00:14:35.109 "code": -32602, 00:14:35.109 "message": "Invalid parameters" 00:14:35.109 } 00:14:35.109 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:35.109 19:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.109 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.110 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.110 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 95eaeec9-48f2-4b22-bd2f-d2e17081bd30 00:14:35.110 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:35.110 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 95EAEEC948F24B22BD2FD2E17081BD30 -i 00:14:35.366 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:37.283 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:37.283 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:37.283 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1088307 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1088307 ']' 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1088307 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:37.541 19:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1088307 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1088307' 00:14:37.541 killing process with pid 1088307 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1088307 00:14:37.541 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1088307 00:14:38.106 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:14:38.364 rmmod nvme_tcp 00:14:38.364 rmmod nvme_fabrics 00:14:38.364 rmmod nvme_keyring 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1086676 ']' 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1086676 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1086676 ']' 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1086676 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1086676 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086676' 00:14:38.364 killing process with pid 1086676 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1086676 00:14:38.364 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1086676 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.622 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:41.157 00:14:41.157 real 0m25.084s 00:14:41.157 user 0m36.661s 00:14:41.157 sys 0m4.573s 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:41.157 ************************************ 00:14:41.157 END TEST nvmf_ns_masking 00:14:41.157 ************************************ 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.157 ************************************ 00:14:41.157 START TEST nvmf_nvme_cli 00:14:41.157 ************************************ 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.157 * Looking for test storage... 00:14:41.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:41.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.157 --rc genhtml_branch_coverage=1 00:14:41.157 --rc genhtml_function_coverage=1 00:14:41.157 --rc genhtml_legend=1 00:14:41.157 --rc geninfo_all_blocks=1 00:14:41.157 --rc geninfo_unexecuted_blocks=1 00:14:41.157 
00:14:41.157 ' 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:41.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.157 --rc genhtml_branch_coverage=1 00:14:41.157 --rc genhtml_function_coverage=1 00:14:41.157 --rc genhtml_legend=1 00:14:41.157 --rc geninfo_all_blocks=1 00:14:41.157 --rc geninfo_unexecuted_blocks=1 00:14:41.157 00:14:41.157 ' 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:41.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.157 --rc genhtml_branch_coverage=1 00:14:41.157 --rc genhtml_function_coverage=1 00:14:41.157 --rc genhtml_legend=1 00:14:41.157 --rc geninfo_all_blocks=1 00:14:41.157 --rc geninfo_unexecuted_blocks=1 00:14:41.157 00:14:41.157 ' 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:41.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.157 --rc genhtml_branch_coverage=1 00:14:41.157 --rc genhtml_function_coverage=1 00:14:41.157 --rc genhtml_legend=1 00:14:41.157 --rc geninfo_all_blocks=1 00:14:41.157 --rc geninfo_unexecuted_blocks=1 00:14:41.157 00:14:41.157 ' 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.157 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.158 19:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.158 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:43.064 19:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:43.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.064 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:43.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.065 19:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:43.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:43.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.065 19:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:43.065 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:43.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:14:43.324 00:14:43.324 --- 10.0.0.2 ping statistics --- 00:14:43.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.324 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:14:43.324 00:14:43.324 --- 10.0.0.1 ping statistics --- 00:14:43.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.324 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.324 19:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1091215 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1091215 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1091215 ']' 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.324 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.324 [2024-12-06 19:12:53.767584] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:14:43.324 [2024-12-06 19:12:53.767677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.324 [2024-12-06 19:12:53.837937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.324 [2024-12-06 19:12:53.894125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.324 [2024-12-06 19:12:53.894179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.324 [2024-12-06 19:12:53.894207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.324 [2024-12-06 19:12:53.894218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.324 [2024-12-06 19:12:53.894227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.324 [2024-12-06 19:12:53.896075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.324 [2024-12-06 19:12:53.896137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.324 [2024-12-06 19:12:53.898914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.324 [2024-12-06 19:12:53.898927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.582 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.582 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:43.582 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:43.582 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:43.582 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.582 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.583 [2024-12-06 19:12:54.045935] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.583 Malloc0 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.583 Malloc1 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.583 [2024-12-06 19:12:54.142613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.583 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:43.841 00:14:43.841 Discovery Log Number of Records 2, Generation counter 2 00:14:43.841 =====Discovery Log Entry 0====== 00:14:43.841 trtype: tcp 00:14:43.841 adrfam: ipv4 00:14:43.841 subtype: current discovery subsystem 00:14:43.841 treq: not required 00:14:43.841 portid: 0 00:14:43.841 trsvcid: 4420 
00:14:43.841 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:43.841 traddr: 10.0.0.2 00:14:43.841 eflags: explicit discovery connections, duplicate discovery information 00:14:43.841 sectype: none 00:14:43.841 =====Discovery Log Entry 1====== 00:14:43.841 trtype: tcp 00:14:43.841 adrfam: ipv4 00:14:43.841 subtype: nvme subsystem 00:14:43.841 treq: not required 00:14:43.841 portid: 0 00:14:43.841 trsvcid: 4420 00:14:43.841 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:43.841 traddr: 10.0.0.2 00:14:43.841 eflags: none 00:14:43.841 sectype: none 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:43.841 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:44.781 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:44.781 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:44.781 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.781 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:44.781 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:44.781 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:46.684 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:46.685 
19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:46.685 /dev/nvme0n2 ]] 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.685 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:46.943 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.201 rmmod nvme_tcp 00:14:47.201 rmmod nvme_fabrics 00:14:47.201 rmmod nvme_keyring 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1091215 ']' 
00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1091215 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1091215 ']' 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1091215 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091215 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.201 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.202 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091215' 00:14:47.202 killing process with pid 1091215 00:14:47.202 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1091215 00:14:47.202 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1091215 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.461 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:49.999 00:14:49.999 real 0m8.830s 00:14:49.999 user 0m17.044s 00:14:49.999 sys 0m2.340s 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.999 ************************************ 00:14:49.999 END TEST nvmf_nvme_cli 00:14:49.999 ************************************ 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:49.999 ************************************ 00:14:49.999 
START TEST nvmf_vfio_user 00:14:49.999 ************************************ 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:49.999 * Looking for test storage... 00:14:49.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:49.999 19:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:49.999 19:13:00 
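The `cmp_versions` trace above splits each version string on `.` and `-` into arrays (`read -ra ver1`/`ver2` with `IFS=.-`) and compares component by component, padding the shorter array with zeros. A condensed sketch of that logic (hypothetical function name; the real helper lives in `scripts/common.sh`):

```shell
# Sketch of component-wise "less than" version comparison, assumed to mirror
# the cmp_versions approach traced in the log: split on '.'/'-', compare numerically.
version_lt() {
    local IFS=.-
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}     # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                 # versions are equal
}
```

This is why the log's `lt 1.15 2` check succeeds: the first components already decide the comparison (1 < 2), so `1.15` is not treated as numerically larger than `2`.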
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:49.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.999 --rc genhtml_branch_coverage=1 00:14:49.999 --rc genhtml_function_coverage=1 00:14:49.999 --rc genhtml_legend=1 00:14:49.999 --rc geninfo_all_blocks=1 00:14:49.999 --rc geninfo_unexecuted_blocks=1 00:14:49.999 00:14:49.999 ' 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:49.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.999 --rc genhtml_branch_coverage=1 00:14:49.999 --rc genhtml_function_coverage=1 00:14:49.999 --rc genhtml_legend=1 00:14:49.999 --rc geninfo_all_blocks=1 00:14:49.999 --rc geninfo_unexecuted_blocks=1 00:14:49.999 00:14:49.999 ' 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:49.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.999 --rc genhtml_branch_coverage=1 00:14:49.999 --rc genhtml_function_coverage=1 00:14:49.999 --rc genhtml_legend=1 00:14:49.999 --rc geninfo_all_blocks=1 00:14:49.999 --rc geninfo_unexecuted_blocks=1 00:14:49.999 00:14:49.999 ' 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:49.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.999 --rc genhtml_branch_coverage=1 00:14:49.999 --rc genhtml_function_coverage=1 00:14:49.999 --rc genhtml_legend=1 00:14:49.999 --rc geninfo_all_blocks=1 00:14:49.999 --rc geninfo_unexecuted_blocks=1 00:14:49.999 00:14:49.999 ' 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.999 
19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.999 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:50.000 19:13:00 
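The non-fatal `[: : integer expression expected` error above comes from testing an empty variable with `-eq` (`'[' '' -eq 1 ']'` at `common.sh` line 33). The usual defensive pattern is to default the value before the numeric test. A minimal sketch (the function and flag names here are hypothetical; the actual variable that was empty is not shown in the log):

```shell
# Defensive numeric test: default an empty/unset flag to 0 before using -eq,
# so '[' never sees an empty string as an integer operand.
flag_enabled() {
    local val=${1:-0}       # empty or missing argument falls back to 0
    [ "$val" -eq 1 ]        # now always a well-formed integer comparison
}
```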
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1092156 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1092156' 00:14:50.000 Process pid: 1092156 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1092156 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1092156 ']' 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.000 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:50.000 [2024-12-06 19:13:00.349251] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:14:50.000 [2024-12-06 19:13:00.349336] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.000 [2024-12-06 19:13:00.418657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.000 [2024-12-06 19:13:00.476819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.000 [2024-12-06 19:13:00.476873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.000 [2024-12-06 19:13:00.476901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.000 [2024-12-06 19:13:00.476912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.000 [2024-12-06 19:13:00.476923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
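The `waitforlisten 1092156` step above (with `local max_retries=100`) blocks until the freshly launched `nvmf_tgt` exposes its RPC socket at `/var/tmp/spdk.sock`. The polling shape of such a helper can be sketched as (hypothetical function name; SPDK's real implementation also checks that the PID is still alive between retries):

```shell
# Sketch of polling for a UNIX domain socket to appear, with bounded retries.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket exists: target is listening
        sleep 0.1
    done
    return 1                         # gave up: target never came up
}
```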
00:14:50.000 [2024-12-06 19:13:00.478350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.000 [2024-12-06 19:13:00.478464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.000 [2024-12-06 19:13:00.478572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.000 [2024-12-06 19:13:00.478581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.258 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.258 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:50.258 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:51.190 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:51.448 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:51.448 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:51.448 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:51.448 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:51.448 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:51.706 Malloc1 00:14:51.706 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:51.989 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:52.247 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:52.504 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.505 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:52.505 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:52.762 Malloc2 00:14:52.762 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:53.019 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:53.276 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:53.843 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:53.843 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:53.843 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:53.843 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:53.843 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:53.843 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:53.843 [2024-12-06 19:13:04.144007] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:14:53.843 [2024-12-06 19:13:04.144048] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092578 ] 00:14:53.843 [2024-12-06 19:13:04.194858] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:53.844 [2024-12-06 19:13:04.200436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:53.844 [2024-12-06 19:13:04.200469] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f26c0b5d000 00:14:53.844 [2024-12-06 19:13:04.201432] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.844 [2024-12-06 19:13:04.202425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.844 [2024-12-06 19:13:04.203438] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.844 [2024-12-06 19:13:04.204440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.844 [2024-12-06 19:13:04.205444] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.844 [2024-12-06 19:13:04.206443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.844 [2024-12-06 19:13:04.207451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.844 [2024-12-06 19:13:04.208456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.844 [2024-12-06 19:13:04.209463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:53.844 [2024-12-06 19:13:04.209483] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f26c0b52000 00:14:53.844 [2024-12-06 19:13:04.210600] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:53.844 [2024-12-06 19:13:04.226258] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:53.844 [2024-12-06 19:13:04.226304] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:53.844 [2024-12-06 19:13:04.228569] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:53.844 [2024-12-06 19:13:04.228629] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:53.844 [2024-12-06 19:13:04.228755] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:53.844 [2024-12-06 19:13:04.228789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:53.844 [2024-12-06 19:13:04.228801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:53.844 [2024-12-06 19:13:04.229561] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:53.844 [2024-12-06 19:13:04.229587] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:53.844 [2024-12-06 19:13:04.229607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:53.844 [2024-12-06 19:13:04.230581] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:53.844 [2024-12-06 19:13:04.230603] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:53.844 [2024-12-06 19:13:04.230617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.844 [2024-12-06 19:13:04.231572] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:53.844 [2024-12-06 19:13:04.231591] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.844 [2024-12-06 19:13:04.232579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:53.844 [2024-12-06 19:13:04.232598] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:53.844 [2024-12-06 19:13:04.232606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:53.844 [2024-12-06 19:13:04.232617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.844 [2024-12-06 19:13:04.232727] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:53.844 [2024-12-06 19:13:04.232738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.844 [2024-12-06 19:13:04.232747] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:53.844 [2024-12-06 19:13:04.233583] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:53.844 [2024-12-06 19:13:04.234588] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:53.844 [2024-12-06 19:13:04.235595] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:53.844 [2024-12-06 19:13:04.236588] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.844 [2024-12-06 19:13:04.236753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.844 [2024-12-06 19:13:04.237607] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:53.844 [2024-12-06 19:13:04.237625] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.844 [2024-12-06 19:13:04.237634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.237679] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:53.844 [2024-12-06 19:13:04.237695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.237742] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.844 [2024-12-06 19:13:04.237757] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.844 [2024-12-06 19:13:04.237764] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.844 [2024-12-06 19:13:04.237785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.844 [2024-12-06 19:13:04.237867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:53.844 [2024-12-06 19:13:04.237886] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:53.844 [2024-12-06 19:13:04.237895] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:53.844 [2024-12-06 19:13:04.237902] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:53.844 [2024-12-06 19:13:04.237911] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:53.844 [2024-12-06 19:13:04.237918] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:53.844 [2024-12-06 19:13:04.237926] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:53.844 [2024-12-06 19:13:04.237934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.237946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.237981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:53.844 [2024-12-06 19:13:04.238001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:53.844 [2024-12-06 19:13:04.238033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.844 [2024-12-06 
19:13:04.238045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.844 [2024-12-06 19:13:04.238056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.844 [2024-12-06 19:13:04.238067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.844 [2024-12-06 19:13:04.238075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.238091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.238106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:53.844 [2024-12-06 19:13:04.238117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:53.844 [2024-12-06 19:13:04.238127] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:53.844 [2024-12-06 19:13:04.238136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.238150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.238164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.238177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:53.844 [2024-12-06 19:13:04.238196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:53.844 [2024-12-06 19:13:04.238262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.238279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:53.844 [2024-12-06 19:13:04.238293] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:53.845 [2024-12-06 19:13:04.238301] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:53.845 [2024-12-06 19:13:04.238307] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.845 [2024-12-06 19:13:04.238316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.238334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.238356] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:53.845 [2024-12-06 19:13:04.238377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238405] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.845 [2024-12-06 19:13:04.238412] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.845 [2024-12-06 19:13:04.238418] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.845 [2024-12-06 19:13:04.238427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.238468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.238487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238513] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.845 [2024-12-06 19:13:04.238521] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.845 [2024-12-06 19:13:04.238527] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.845 [2024-12-06 19:13:04.238535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.238548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.238566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238631] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:53.845 [2024-12-06 19:13:04.238639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:53.845 [2024-12-06 19:13:04.238662] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:53.845 [2024-12-06 19:13:04.238703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.238731] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.238752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.238764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.238781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.238793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.238809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.238821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.238844] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:53.845 [2024-12-06 19:13:04.238854] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:53.845 [2024-12-06 19:13:04.238861] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:53.845 [2024-12-06 19:13:04.238867] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:53.845 [2024-12-06 19:13:04.238873] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:53.845 [2024-12-06 19:13:04.238882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:53.845 [2024-12-06 19:13:04.238894] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:53.845 [2024-12-06 19:13:04.238903] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:53.845 [2024-12-06 19:13:04.238909] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.845 [2024-12-06 19:13:04.238917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.238929] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:53.845 [2024-12-06 19:13:04.238940] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.845 [2024-12-06 19:13:04.238947] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.845 [2024-12-06 19:13:04.238974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.238986] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:53.845 [2024-12-06 19:13:04.238994] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:53.845 [2024-12-06 19:13:04.239000] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.845 [2024-12-06 19:13:04.239008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:53.845 [2024-12-06 19:13:04.239034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.239063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.239081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:53.845 [2024-12-06 19:13:04.239093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:53.845 ===================================================== 00:14:53.845 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.845 ===================================================== 00:14:53.845 Controller Capabilities/Features 00:14:53.845 ================================ 00:14:53.845 Vendor ID: 4e58 00:14:53.845 Subsystem Vendor ID: 4e58 00:14:53.845 Serial Number: SPDK1 00:14:53.845 Model Number: SPDK bdev Controller 00:14:53.845 Firmware Version: 25.01 00:14:53.845 Recommended Arb Burst: 6 00:14:53.845 IEEE OUI Identifier: 8d 6b 50 00:14:53.845 Multi-path I/O 00:14:53.845 May have multiple subsystem ports: Yes 00:14:53.845 May have multiple controllers: Yes 00:14:53.845 Associated with SR-IOV VF: No 00:14:53.845 Max Data Transfer Size: 131072 00:14:53.845 Max Number of Namespaces: 32 00:14:53.845 Max Number of I/O Queues: 127 00:14:53.845 NVMe Specification Version (VS): 1.3 00:14:53.845 NVMe Specification Version (Identify): 1.3 00:14:53.845 Maximum Queue Entries: 256 00:14:53.845 Contiguous Queues Required: Yes 00:14:53.845 Arbitration Mechanisms Supported 00:14:53.845 Weighted Round Robin: Not Supported 00:14:53.845 Vendor Specific: Not Supported 00:14:53.845 Reset Timeout: 15000 ms 00:14:53.845 Doorbell Stride: 4 bytes 00:14:53.845 NVM Subsystem Reset: Not Supported 00:14:53.845 Command Sets Supported 00:14:53.845 NVM Command Set: Supported 00:14:53.845 Boot Partition: Not Supported 00:14:53.845 Memory 
Page Size Minimum: 4096 bytes 00:14:53.845 Memory Page Size Maximum: 4096 bytes 00:14:53.845 Persistent Memory Region: Not Supported 00:14:53.845 Optional Asynchronous Events Supported 00:14:53.845 Namespace Attribute Notices: Supported 00:14:53.845 Firmware Activation Notices: Not Supported 00:14:53.845 ANA Change Notices: Not Supported 00:14:53.845 PLE Aggregate Log Change Notices: Not Supported 00:14:53.845 LBA Status Info Alert Notices: Not Supported 00:14:53.845 EGE Aggregate Log Change Notices: Not Supported 00:14:53.845 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.845 Zone Descriptor Change Notices: Not Supported 00:14:53.845 Discovery Log Change Notices: Not Supported 00:14:53.845 Controller Attributes 00:14:53.845 128-bit Host Identifier: Supported 00:14:53.845 Non-Operational Permissive Mode: Not Supported 00:14:53.845 NVM Sets: Not Supported 00:14:53.845 Read Recovery Levels: Not Supported 00:14:53.845 Endurance Groups: Not Supported 00:14:53.845 Predictable Latency Mode: Not Supported 00:14:53.845 Traffic Based Keep ALive: Not Supported 00:14:53.845 Namespace Granularity: Not Supported 00:14:53.845 SQ Associations: Not Supported 00:14:53.845 UUID List: Not Supported 00:14:53.845 Multi-Domain Subsystem: Not Supported 00:14:53.845 Fixed Capacity Management: Not Supported 00:14:53.846 Variable Capacity Management: Not Supported 00:14:53.846 Delete Endurance Group: Not Supported 00:14:53.846 Delete NVM Set: Not Supported 00:14:53.846 Extended LBA Formats Supported: Not Supported 00:14:53.846 Flexible Data Placement Supported: Not Supported 00:14:53.846 00:14:53.846 Controller Memory Buffer Support 00:14:53.846 ================================ 00:14:53.846 Supported: No 00:14:53.846 00:14:53.846 Persistent Memory Region Support 00:14:53.846 ================================ 00:14:53.846 Supported: No 00:14:53.846 00:14:53.846 Admin Command Set Attributes 00:14:53.846 ============================ 00:14:53.846 Security Send/Receive: Not Supported 
00:14:53.846 Format NVM: Not Supported 00:14:53.846 Firmware Activate/Download: Not Supported 00:14:53.846 Namespace Management: Not Supported 00:14:53.846 Device Self-Test: Not Supported 00:14:53.846 Directives: Not Supported 00:14:53.846 NVMe-MI: Not Supported 00:14:53.846 Virtualization Management: Not Supported 00:14:53.846 Doorbell Buffer Config: Not Supported 00:14:53.846 Get LBA Status Capability: Not Supported 00:14:53.846 Command & Feature Lockdown Capability: Not Supported 00:14:53.846 Abort Command Limit: 4 00:14:53.846 Async Event Request Limit: 4 00:14:53.846 Number of Firmware Slots: N/A 00:14:53.846 Firmware Slot 1 Read-Only: N/A 00:14:53.846 Firmware Activation Without Reset: N/A 00:14:53.846 Multiple Update Detection Support: N/A 00:14:53.846 Firmware Update Granularity: No Information Provided 00:14:53.846 Per-Namespace SMART Log: No 00:14:53.846 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.846 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:53.846 Command Effects Log Page: Supported 00:14:53.846 Get Log Page Extended Data: Supported 00:14:53.846 Telemetry Log Pages: Not Supported 00:14:53.846 Persistent Event Log Pages: Not Supported 00:14:53.846 Supported Log Pages Log Page: May Support 00:14:53.846 Commands Supported & Effects Log Page: Not Supported 00:14:53.846 Feature Identifiers & Effects Log Page:May Support 00:14:53.846 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.846 Data Area 4 for Telemetry Log: Not Supported 00:14:53.846 Error Log Page Entries Supported: 128 00:14:53.846 Keep Alive: Supported 00:14:53.846 Keep Alive Granularity: 10000 ms 00:14:53.846 00:14:53.846 NVM Command Set Attributes 00:14:53.846 ========================== 00:14:53.846 Submission Queue Entry Size 00:14:53.846 Max: 64 00:14:53.846 Min: 64 00:14:53.846 Completion Queue Entry Size 00:14:53.846 Max: 16 00:14:53.846 Min: 16 00:14:53.846 Number of Namespaces: 32 00:14:53.846 Compare Command: Supported 00:14:53.846 Write Uncorrectable 
Command: Not Supported 00:14:53.846 Dataset Management Command: Supported 00:14:53.846 Write Zeroes Command: Supported 00:14:53.846 Set Features Save Field: Not Supported 00:14:53.846 Reservations: Not Supported 00:14:53.846 Timestamp: Not Supported 00:14:53.846 Copy: Supported 00:14:53.846 Volatile Write Cache: Present 00:14:53.846 Atomic Write Unit (Normal): 1 00:14:53.846 Atomic Write Unit (PFail): 1 00:14:53.846 Atomic Compare & Write Unit: 1 00:14:53.846 Fused Compare & Write: Supported 00:14:53.846 Scatter-Gather List 00:14:53.846 SGL Command Set: Supported (Dword aligned) 00:14:53.846 SGL Keyed: Not Supported 00:14:53.846 SGL Bit Bucket Descriptor: Not Supported 00:14:53.846 SGL Metadata Pointer: Not Supported 00:14:53.846 Oversized SGL: Not Supported 00:14:53.846 SGL Metadata Address: Not Supported 00:14:53.846 SGL Offset: Not Supported 00:14:53.846 Transport SGL Data Block: Not Supported 00:14:53.846 Replay Protected Memory Block: Not Supported 00:14:53.846 00:14:53.846 Firmware Slot Information 00:14:53.846 ========================= 00:14:53.846 Active slot: 1 00:14:53.846 Slot 1 Firmware Revision: 25.01 00:14:53.846 00:14:53.846 00:14:53.846 Commands Supported and Effects 00:14:53.846 ============================== 00:14:53.846 Admin Commands 00:14:53.846 -------------- 00:14:53.846 Get Log Page (02h): Supported 00:14:53.846 Identify (06h): Supported 00:14:53.846 Abort (08h): Supported 00:14:53.846 Set Features (09h): Supported 00:14:53.846 Get Features (0Ah): Supported 00:14:53.846 Asynchronous Event Request (0Ch): Supported 00:14:53.846 Keep Alive (18h): Supported 00:14:53.846 I/O Commands 00:14:53.846 ------------ 00:14:53.846 Flush (00h): Supported LBA-Change 00:14:53.846 Write (01h): Supported LBA-Change 00:14:53.846 Read (02h): Supported 00:14:53.846 Compare (05h): Supported 00:14:53.846 Write Zeroes (08h): Supported LBA-Change 00:14:53.846 Dataset Management (09h): Supported LBA-Change 00:14:53.846 Copy (19h): Supported LBA-Change 00:14:53.846 
00:14:53.846 Error Log 00:14:53.846 ========= 00:14:53.846 00:14:53.846 Arbitration 00:14:53.846 =========== 00:14:53.846 Arbitration Burst: 1 00:14:53.846 00:14:53.846 Power Management 00:14:53.846 ================ 00:14:53.846 Number of Power States: 1 00:14:53.846 Current Power State: Power State #0 00:14:53.846 Power State #0: 00:14:53.846 Max Power: 0.00 W 00:14:53.846 Non-Operational State: Operational 00:14:53.846 Entry Latency: Not Reported 00:14:53.846 Exit Latency: Not Reported 00:14:53.846 Relative Read Throughput: 0 00:14:53.846 Relative Read Latency: 0 00:14:53.846 Relative Write Throughput: 0 00:14:53.846 Relative Write Latency: 0 00:14:53.846 Idle Power: Not Reported 00:14:53.846 Active Power: Not Reported 00:14:53.846 Non-Operational Permissive Mode: Not Supported 00:14:53.846 00:14:53.846 Health Information 00:14:53.846 ================== 00:14:53.846 Critical Warnings: 00:14:53.846 Available Spare Space: OK 00:14:53.846 Temperature: OK 00:14:53.846 Device Reliability: OK 00:14:53.846 Read Only: No 00:14:53.846 Volatile Memory Backup: OK 00:14:53.846 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:53.846 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:53.846 Available Spare: 0% 00:14:53.846 Available Spare Threshold: 0% 00:14:53.846 Life Percentage Used: 0% 
[2024-12-06 19:13:04.239215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:53.846 [2024-12-06 19:13:04.239231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:53.846 [2024-12-06 19:13:04.239278] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:53.846 [2024-12-06 19:13:04.239297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.846 [2024-12-06 19:13:04.239308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.846 [2024-12-06 19:13:04.239317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.846 [2024-12-06 19:13:04.239326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.846 [2024-12-06 19:13:04.242678] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:53.846 [2024-12-06 19:13:04.242703] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:53.846 [2024-12-06 19:13:04.243634] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.846 [2024-12-06 19:13:04.243744] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:53.846 [2024-12-06 19:13:04.243759] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:53.846 [2024-12-06 19:13:04.244660] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:53.846 [2024-12-06 19:13:04.244693] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:53.846 [2024-12-06 19:13:04.244763] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:53.846 [2024-12-06 19:13:04.247684] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:53.846
00:14:53.846 Data Units Read: 0 00:14:53.846 Data Units Written: 0 00:14:53.846 Host Read Commands: 0 00:14:53.846 Host Write Commands: 0 00:14:53.846 Controller Busy Time: 0 minutes 00:14:53.846 Power Cycles: 0 00:14:53.846 Power On Hours: 0 hours 00:14:53.846 Unsafe Shutdowns: 0 00:14:53.846 Unrecoverable Media Errors: 0 00:14:53.846 Lifetime Error Log Entries: 0 00:14:53.846 Warning Temperature Time: 0 minutes 00:14:53.846 Critical Temperature Time: 0 minutes 00:14:53.846 00:14:53.846 Number of Queues 00:14:53.846 ================ 00:14:53.846 Number of I/O Submission Queues: 127 00:14:53.846 Number of I/O Completion Queues: 127 00:14:53.846 00:14:53.846 Active Namespaces 00:14:53.846 ================= 00:14:53.846 Namespace ID:1 00:14:53.846 Error Recovery Timeout: Unlimited 00:14:53.846 Command Set Identifier: NVM (00h) 00:14:53.846 Deallocate: Supported 00:14:53.847 Deallocated/Unwritten Error: Not Supported 00:14:53.847 Deallocated Read Value: Unknown 00:14:53.847 Deallocate in Write Zeroes: Not Supported 00:14:53.847 Deallocated Guard Field: 0xFFFF 00:14:53.847 Flush: Supported 00:14:53.847 Reservation: Supported 00:14:53.847 Namespace Sharing Capabilities: Multiple Controllers 00:14:53.847 Size (in LBAs): 131072 (0GiB) 00:14:53.847 Capacity (in LBAs): 131072 (0GiB) 00:14:53.847 Utilization (in LBAs): 131072 (0GiB) 00:14:53.847 NGUID: 8968673CD1914E2383E6174FE3B90331 00:14:53.847 UUID: 8968673c-d191-4e23-83e6-174fe3b90331 00:14:53.847 Thin Provisioning: Not Supported 00:14:53.847 Per-NS Atomic Units: Yes 00:14:53.847 Atomic Boundary Size (Normal): 0 00:14:53.847 Atomic Boundary Size (PFail): 0 00:14:53.847 Atomic Boundary Offset: 0 00:14:53.847 Maximum Single Source Range Length: 65535 00:14:53.847 Maximum Copy Length: 65535 00:14:53.847 Maximum Source Range Count: 1 00:14:53.847 NGUID/EUI64 Never Reused: No 00:14:53.847 Namespace Write Protected: No 00:14:53.847 Number of LBA Formats: 1 00:14:53.847 Current LBA Format: LBA Format #00 00:14:53.847 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:53.847 00:14:53.847 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:54.143 [2024-12-06 19:13:04.502582] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.421 Initializing NVMe Controllers 00:14:59.421 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:59.421 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:59.421 Initialization complete. Launching workers. 00:14:59.421 ======================================================== 00:14:59.421 Latency(us) 00:14:59.421 Device Information : IOPS MiB/s Average min max 00:14:59.421 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31380.80 122.58 4080.31 1202.06 10291.82 00:14:59.421 ======================================================== 00:14:59.421 Total : 31380.80 122.58 4080.31 1202.06 10291.82 00:14:59.421 00:14:59.421 [2024-12-06 19:13:09.525571] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.421 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:59.421 [2024-12-06 19:13:09.776820] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.683 Initializing NVMe Controllers 00:15:04.683 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.683 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:04.683 Initialization complete. Launching workers. 00:15:04.683 ======================================================== 00:15:04.683 Latency(us) 00:15:04.683 Device Information : IOPS MiB/s Average min max 00:15:04.683 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.81 6969.39 11976.72 00:15:04.683 ======================================================== 00:15:04.683 Total : 16051.20 62.70 7982.81 6969.39 11976.72 00:15:04.683 00:15:04.683 [2024-12-06 19:13:14.814932] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.683 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:04.683 [2024-12-06 19:13:15.044053] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.943 [2024-12-06 19:13:20.124028] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.943 Initializing NVMe Controllers 00:15:09.943 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.943 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:09.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:09.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:09.943 Initialization complete. 
Launching workers. 00:15:09.943 Starting thread on core 2 00:15:09.943 Starting thread on core 3 00:15:09.943 Starting thread on core 1 00:15:09.943 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:09.943 [2024-12-06 19:13:20.448181] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.222 [2024-12-06 19:13:23.527941] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.222 Initializing NVMe Controllers 00:15:13.222 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.222 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.222 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:13.222 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:13.222 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:13.222 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:13.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:13.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:13.222 Initialization complete. Launching workers. 
00:15:13.222 Starting thread on core 1 with urgent priority queue 00:15:13.222 Starting thread on core 2 with urgent priority queue 00:15:13.222 Starting thread on core 3 with urgent priority queue 00:15:13.222 Starting thread on core 0 with urgent priority queue 00:15:13.222 SPDK bdev Controller (SPDK1 ) core 0: 6106.33 IO/s 16.38 secs/100000 ios 00:15:13.222 SPDK bdev Controller (SPDK1 ) core 1: 5959.00 IO/s 16.78 secs/100000 ios 00:15:13.222 SPDK bdev Controller (SPDK1 ) core 2: 5804.33 IO/s 17.23 secs/100000 ios 00:15:13.222 SPDK bdev Controller (SPDK1 ) core 3: 5797.00 IO/s 17.25 secs/100000 ios 00:15:13.222 ======================================================== 00:15:13.222 00:15:13.222 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:13.480 [2024-12-06 19:13:23.853281] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.480 Initializing NVMe Controllers 00:15:13.480 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.480 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.480 Namespace ID: 1 size: 0GB 00:15:13.480 Initialization complete. 00:15:13.480 INFO: using host memory buffer for IO 00:15:13.480 Hello world! 
00:15:13.480 [2024-12-06 19:13:23.887953] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.480 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:13.737 [2024-12-06 19:13:24.203137] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:14.669 Initializing NVMe Controllers 00:15:14.669 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.669 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.669 Initialization complete. Launching workers. 00:15:14.669 submit (in ns) avg, min, max = 7916.1, 3504.4, 4017788.9 00:15:14.669 complete (in ns) avg, min, max = 27771.9, 2064.4, 4047685.6 00:15:14.669 00:15:14.669 Submit histogram 00:15:14.669 ================ 00:15:14.669 Range in us Cumulative Count 00:15:14.669 3.484 - 3.508: 0.0082% ( 1) 00:15:14.669 3.508 - 3.532: 0.1637% ( 19) 00:15:14.669 3.532 - 3.556: 0.5074% ( 42) 00:15:14.669 3.556 - 3.579: 1.7351% ( 150) 00:15:14.669 3.579 - 3.603: 4.2969% ( 313) 00:15:14.669 3.603 - 3.627: 9.6251% ( 651) 00:15:14.669 3.627 - 3.650: 17.8507% ( 1005) 00:15:14.669 3.650 - 3.674: 27.1485% ( 1136) 00:15:14.669 3.674 - 3.698: 36.0861% ( 1092) 00:15:14.669 3.698 - 3.721: 43.3213% ( 884) 00:15:14.669 3.721 - 3.745: 49.5171% ( 757) 00:15:14.669 3.745 - 3.769: 54.4443% ( 602) 00:15:14.669 3.769 - 3.793: 58.4629% ( 491) 00:15:14.669 3.793 - 3.816: 62.0396% ( 437) 00:15:14.669 3.816 - 3.840: 65.5590% ( 430) 00:15:14.669 3.840 - 3.864: 69.2994% ( 457) 00:15:14.669 3.864 - 3.887: 73.3426% ( 494) 00:15:14.669 3.887 - 3.911: 77.8278% ( 548) 00:15:14.669 3.911 - 3.935: 81.9119% ( 499) 00:15:14.669 3.935 - 3.959: 84.8175% ( 355) 00:15:14.669 3.959 - 3.982: 86.8227% ( 245) 
00:15:14.669 3.982 - 4.006: 88.5742% ( 214) 00:15:14.669 4.006 - 4.030: 89.7446% ( 143) 00:15:14.669 4.030 - 4.053: 90.8905% ( 140) 00:15:14.669 4.053 - 4.077: 91.9463% ( 129) 00:15:14.669 4.077 - 4.101: 92.8957% ( 116) 00:15:14.669 4.101 - 4.124: 93.8042% ( 111) 00:15:14.669 4.124 - 4.148: 94.4426% ( 78) 00:15:14.669 4.148 - 4.172: 95.0892% ( 79) 00:15:14.669 4.172 - 4.196: 95.4575% ( 45) 00:15:14.669 4.196 - 4.219: 95.7931% ( 41) 00:15:14.669 4.219 - 4.243: 96.0304% ( 29) 00:15:14.669 4.243 - 4.267: 96.2269% ( 24) 00:15:14.669 4.267 - 4.290: 96.4642% ( 29) 00:15:14.669 4.290 - 4.314: 96.5952% ( 16) 00:15:14.669 4.314 - 4.338: 96.6770% ( 10) 00:15:14.670 4.338 - 4.361: 96.7589% ( 10) 00:15:14.670 4.361 - 4.385: 96.8735% ( 14) 00:15:14.670 4.385 - 4.409: 96.9308% ( 7) 00:15:14.670 4.409 - 4.433: 96.9881% ( 7) 00:15:14.670 4.433 - 4.456: 97.0699% ( 10) 00:15:14.670 4.456 - 4.480: 97.1272% ( 7) 00:15:14.670 4.480 - 4.504: 97.1599% ( 4) 00:15:14.670 4.504 - 4.527: 97.1681% ( 1) 00:15:14.670 4.527 - 4.551: 97.1927% ( 3) 00:15:14.670 4.551 - 4.575: 97.2254% ( 4) 00:15:14.670 4.575 - 4.599: 97.2500% ( 3) 00:15:14.670 4.622 - 4.646: 97.2745% ( 3) 00:15:14.670 4.646 - 4.670: 97.2827% ( 1) 00:15:14.670 4.670 - 4.693: 97.2909% ( 1) 00:15:14.670 4.717 - 4.741: 97.3073% ( 2) 00:15:14.670 4.741 - 4.764: 97.3318% ( 3) 00:15:14.670 4.764 - 4.788: 97.3645% ( 4) 00:15:14.670 4.788 - 4.812: 97.3809% ( 2) 00:15:14.670 4.812 - 4.836: 97.4218% ( 5) 00:15:14.670 4.836 - 4.859: 97.4628% ( 5) 00:15:14.670 4.859 - 4.883: 97.5037% ( 5) 00:15:14.670 4.883 - 4.907: 97.5610% ( 7) 00:15:14.670 4.907 - 4.930: 97.6183% ( 7) 00:15:14.670 4.930 - 4.954: 97.7001% ( 10) 00:15:14.670 4.954 - 4.978: 97.7329% ( 4) 00:15:14.670 4.978 - 5.001: 97.8065% ( 9) 00:15:14.670 5.001 - 5.025: 97.8229% ( 2) 00:15:14.670 5.025 - 5.049: 97.8474% ( 3) 00:15:14.670 5.049 - 5.073: 97.8802% ( 4) 00:15:14.670 5.073 - 5.096: 97.9211% ( 5) 00:15:14.670 5.096 - 5.120: 97.9702% ( 6) 00:15:14.670 5.120 - 5.144: 97.9784% ( 1) 
00:15:14.670 5.144 - 5.167: 98.0029% ( 3) 00:15:14.670 5.167 - 5.191: 98.0111% ( 1) 00:15:14.670 5.191 - 5.215: 98.0439% ( 4) 00:15:14.670 5.239 - 5.262: 98.0521% ( 1) 00:15:14.670 5.262 - 5.286: 98.0684% ( 2) 00:15:14.670 5.286 - 5.310: 98.0930% ( 3) 00:15:14.670 5.310 - 5.333: 98.1257% ( 4) 00:15:14.670 5.333 - 5.357: 98.1585% ( 4) 00:15:14.670 5.357 - 5.381: 98.1666% ( 1) 00:15:14.670 5.381 - 5.404: 98.1748% ( 1) 00:15:14.670 5.476 - 5.499: 98.1912% ( 2) 00:15:14.670 5.499 - 5.523: 98.1994% ( 1) 00:15:14.670 5.547 - 5.570: 98.2076% ( 1) 00:15:14.670 5.760 - 5.784: 98.2157% ( 1) 00:15:14.670 5.879 - 5.902: 98.2321% ( 2) 00:15:14.670 5.950 - 5.973: 98.2403% ( 1) 00:15:14.670 5.973 - 5.997: 98.2485% ( 1) 00:15:14.670 6.116 - 6.163: 98.2567% ( 1) 00:15:14.670 6.163 - 6.210: 98.2649% ( 1) 00:15:14.670 6.258 - 6.305: 98.2812% ( 2) 00:15:14.670 6.590 - 6.637: 98.2894% ( 1) 00:15:14.670 6.684 - 6.732: 98.2976% ( 1) 00:15:14.670 6.732 - 6.779: 98.3058% ( 1) 00:15:14.670 6.779 - 6.827: 98.3140% ( 1) 00:15:14.670 7.016 - 7.064: 98.3221% ( 1) 00:15:14.670 7.159 - 7.206: 98.3385% ( 2) 00:15:14.670 7.348 - 7.396: 98.3549% ( 2) 00:15:14.670 7.396 - 7.443: 98.3631% ( 1) 00:15:14.670 7.490 - 7.538: 98.3713% ( 1) 00:15:14.670 7.538 - 7.585: 98.3876% ( 2) 00:15:14.670 7.633 - 7.680: 98.3958% ( 1) 00:15:14.670 7.822 - 7.870: 98.4040% ( 1) 00:15:14.670 7.870 - 7.917: 98.4204% ( 2) 00:15:14.670 7.917 - 7.964: 98.4367% ( 2) 00:15:14.670 8.059 - 8.107: 98.4449% ( 1) 00:15:14.670 8.107 - 8.154: 98.4531% ( 1) 00:15:14.670 8.344 - 8.391: 98.4613% ( 1) 00:15:14.670 8.391 - 8.439: 98.4695% ( 1) 00:15:14.670 8.533 - 8.581: 98.4777% ( 1) 00:15:14.670 8.581 - 8.628: 98.4858% ( 1) 00:15:14.670 8.628 - 8.676: 98.4940% ( 1) 00:15:14.670 8.676 - 8.723: 98.5104% ( 2) 00:15:14.670 8.723 - 8.770: 98.5186% ( 1) 00:15:14.670 8.865 - 8.913: 98.5268% ( 1) 00:15:14.670 8.960 - 9.007: 98.5349% ( 1) 00:15:14.670 9.007 - 9.055: 98.5431% ( 1) 00:15:14.670 9.197 - 9.244: 98.5513% ( 1) 00:15:14.670 9.292 - 
9.339: 98.5677% ( 2) 00:15:14.670 9.434 - 9.481: 98.5759% ( 1) 00:15:14.670 9.529 - 9.576: 98.5841% ( 1) 00:15:14.670 9.576 - 9.624: 98.6168% ( 4) 00:15:14.670 9.624 - 9.671: 98.6332% ( 2) 00:15:14.670 9.719 - 9.766: 98.6413% ( 1) 00:15:14.670 9.956 - 10.003: 98.6577% ( 2) 00:15:14.670 10.003 - 10.050: 98.6659% ( 1) 00:15:14.670 10.098 - 10.145: 98.6741% ( 1) 00:15:14.670 10.287 - 10.335: 98.6823% ( 1) 00:15:14.670 10.477 - 10.524: 98.6905% ( 1) 00:15:14.670 10.667 - 10.714: 98.7068% ( 2) 00:15:14.670 10.809 - 10.856: 98.7150% ( 1) 00:15:14.670 10.904 - 10.951: 98.7232% ( 1) 00:15:14.670 10.999 - 11.046: 98.7314% ( 1) 00:15:14.670 11.378 - 11.425: 98.7396% ( 1) 00:15:14.670 11.473 - 11.520: 98.7477% ( 1) 00:15:14.670 11.710 - 11.757: 98.7559% ( 1) 00:15:14.670 11.947 - 11.994: 98.7641% ( 1) 00:15:14.670 11.994 - 12.041: 98.7723% ( 1) 00:15:14.670 12.136 - 12.231: 98.7887% ( 2) 00:15:14.670 12.231 - 12.326: 98.7969% ( 1) 00:15:14.670 12.516 - 12.610: 98.8050% ( 1) 00:15:14.670 12.610 - 12.705: 98.8132% ( 1) 00:15:14.670 12.705 - 12.800: 98.8214% ( 1) 00:15:14.670 12.800 - 12.895: 98.8296% ( 1) 00:15:14.670 12.990 - 13.084: 98.8378% ( 1) 00:15:14.670 13.274 - 13.369: 98.8460% ( 1) 00:15:14.670 13.464 - 13.559: 98.8541% ( 1) 00:15:14.670 13.559 - 13.653: 98.8623% ( 1) 00:15:14.670 13.843 - 13.938: 98.8705% ( 1) 00:15:14.670 13.938 - 14.033: 98.8787% ( 1) 00:15:14.670 14.317 - 14.412: 98.8869% ( 1) 00:15:14.670 14.412 - 14.507: 98.9033% ( 2) 00:15:14.670 14.507 - 14.601: 98.9114% ( 1) 00:15:14.670 14.601 - 14.696: 98.9196% ( 1) 00:15:14.670 14.696 - 14.791: 98.9278% ( 1) 00:15:14.670 14.791 - 14.886: 98.9360% ( 1) 00:15:14.670 17.256 - 17.351: 98.9442% ( 1) 00:15:14.670 17.351 - 17.446: 98.9687% ( 3) 00:15:14.670 17.446 - 17.541: 98.9769% ( 1) 00:15:14.670 17.541 - 17.636: 99.0015% ( 3) 00:15:14.670 17.636 - 17.730: 99.0588% ( 7) 00:15:14.670 17.730 - 17.825: 99.0997% ( 5) 00:15:14.670 17.825 - 17.920: 99.1979% ( 12) 00:15:14.670 17.920 - 18.015: 99.2306% ( 4) 
00:15:14.670 18.015 - 18.110: 99.2388% ( 1) 00:15:14.670 18.110 - 18.204: 99.3207% ( 10) 00:15:14.670 18.204 - 18.299: 99.4271% ( 13) 00:15:14.670 18.299 - 18.394: 99.4762% ( 6) 00:15:14.670 18.394 - 18.489: 99.5498% ( 9) 00:15:14.670 18.489 - 18.584: 99.5826% ( 4) 00:15:14.670 18.584 - 18.679: 99.6399% ( 7) 00:15:14.670 18.679 - 18.773: 99.6726% ( 4) 00:15:14.670 18.773 - 18.868: 99.7299% ( 7) 00:15:14.670 18.868 - 18.963: 99.7545% ( 3) 00:15:14.670 18.963 - 19.058: 99.7626% ( 1) 00:15:14.670 19.058 - 19.153: 99.7708% ( 1) 00:15:14.670 19.153 - 19.247: 99.7954% ( 3) 00:15:14.670 19.247 - 19.342: 99.8036% ( 1) 00:15:14.670 19.342 - 19.437: 99.8118% ( 1) 00:15:14.670 19.721 - 19.816: 99.8199% ( 1) 00:15:14.670 20.385 - 20.480: 99.8281% ( 1) 00:15:14.670 21.333 - 21.428: 99.8363% ( 1) 00:15:14.670 21.713 - 21.807: 99.8445% ( 1) 00:15:14.670 22.092 - 22.187: 99.8527% ( 1) 00:15:14.670 22.756 - 22.850: 99.8609% ( 1) 00:15:14.670 22.945 - 23.040: 99.8690% ( 1) 00:15:14.670 24.178 - 24.273: 99.8772% ( 1) 00:15:14.670 24.273 - 24.462: 99.8854% ( 1) 00:15:14.670 24.841 - 25.031: 99.8936% ( 1) 00:15:14.670 26.359 - 26.548: 99.9018% ( 1) 00:15:14.670 3980.705 - 4004.978: 99.9673% ( 8) 00:15:14.670 4004.978 - 4029.250: 100.0000% ( 4) 00:15:14.670 00:15:14.670 Complete histogram 00:15:14.670 ================== 00:15:14.670 Range in us Cumulative Count 00:15:14.670 2.062 - 2.074: 1.3587% ( 166) 00:15:14.670 2.074 - 2.086: 24.8322% ( 2868) 00:15:14.670 2.086 - 2.098: 32.6731% ( 958) 00:15:14.670 2.098 - 2.110: 39.5891% ( 845) 00:15:14.670 2.110 - 2.121: 55.5165% ( 1946) 00:15:14.670 2.121 - 2.133: 58.0619% ( 311) 00:15:14.670 2.133 - 2.145: 62.5552% ( 549) 00:15:14.670 2.145 - 2.157: 70.6171% ( 985) 00:15:14.670 2.157 - 2.169: 72.0167% ( 171) 00:15:14.670 2.169 - 2.181: 76.0435% ( 492) 00:15:14.670 2.181 - 2.193: 80.1195% ( 498) 00:15:14.670 2.193 - 2.204: 80.9216% ( 98) 00:15:14.670 2.204 - 2.216: 82.5503% ( 199) 00:15:14.670 2.216 - 2.228: 85.9388% ( 414) 00:15:14.670 2.228 - 
2.240: 87.8704% ( 236) 00:15:14.670 2.240 - 2.252: 90.5058% ( 322) 00:15:14.670 2.252 - 2.264: 92.4947% ( 243) 00:15:14.670 2.264 - 2.276: 92.7648% ( 33) 00:15:14.670 2.276 - 2.287: 93.2722% ( 62) 00:15:14.670 2.287 - 2.299: 93.7715% ( 61) 00:15:14.670 2.299 - 2.311: 94.3362% ( 69) 00:15:14.670 2.311 - 2.323: 94.9501% ( 75) 00:15:14.670 2.323 - 2.335: 95.0565% ( 13) 00:15:14.670 2.335 - 2.347: 95.1220% ( 8) 00:15:14.670 2.347 - 2.359: 95.1383% ( 2) 00:15:14.670 2.359 - 2.370: 95.2284% ( 11) 00:15:14.670 2.370 - 2.382: 95.3348% ( 13) 00:15:14.670 2.382 - 2.394: 95.7358% ( 49) 00:15:14.670 2.394 - 2.406: 96.0468% ( 38) 00:15:14.670 2.406 - 2.418: 96.3005% ( 31) 00:15:14.670 2.418 - 2.430: 96.5133% ( 26) 00:15:14.670 2.430 - 2.441: 96.8325% ( 39) 00:15:14.671 2.441 - 2.453: 97.0699% ( 29) 00:15:14.671 2.453 - 2.465: 97.2909% ( 27) 00:15:14.671 2.465 - 2.477: 97.5610% ( 33) 00:15:14.671 2.477 - 2.489: 97.7001% ( 17) 00:15:14.671 2.489 - 2.501: 97.8965% ( 24) 00:15:14.671 2.501 - 2.513: 98.0029% ( 13) 00:15:14.671 2.513 - 2.524: 98.1257% ( 15) 00:15:14.671 2.524 - 2.536: 98.1830% ( 7) 00:15:14.671 2.536 - 2.548: 98.2239% ( 5) 00:15:14.671 2.548 - 2.560: 98.2567% ( 4) 00:15:14.671 2.560 - 2.572: 98.2894% ( 4) 00:15:14.671 2.572 - 2.584: 98.3303% ( 5) 00:15:14.671 2.584 - 2.596: 98.3467% ( 2) 00:15:14.671 2.596 - 2.607: 98.3549% ( 1) 00:15:14.671 2.607 - 2.619: 98.3794% ( 3) 00:15:14.671 2.619 - 2.631: 98.3876% ( 1) 00:15:14.671 2.631 - 2.643: 98.4122% ( 3) 00:15:14.671 2.643 - 2.655: 98.4204% ( 1) 00:15:14.671 2.667 - 2.679: 98.4285% ( 1) 00:15:14.671 2.679 - 2.690: 98.4367% ( 1) 00:15:14.671 2.714 - 2.726: 98.4449% ( 1) 00:15:14.671 2.726 - 2.738: 98.4531% ( 1) 00:15:14.671 2.809 - 2.821: 98.4613% ( 1) 00:15:14.671 2.821 - 2.833: 98.4777% ( 2) 00:15:14.671 2.833 - 2.844: 98.4858% ( 1) 00:15:14.671 2.904 - 2.916: 98.4940% ( 1) 00:15:14.671 2.939 - 2.951: 98.5022% ( 1) 00:15:14.671 3.319 - 3.342: 98.5104% ( 1) 00:15:14.671 3.366 - 3.390: 98.5268% ( 2) 00:15:14.671 3.390 - 
3.413: 98.5349% ( 1) 00:15:14.671 [2024-12-06 19:13:25.226475] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:14.928 3.413 - 3.437: 98.5595% ( 3) 00:15:14.928 3.461 - 3.484: 98.5677% ( 1) 00:15:14.928 3.484 - 3.508: 98.5759% ( 1) 00:15:14.928 3.508 - 3.532: 98.5841% ( 1) 00:15:14.928 3.556 - 3.579: 98.5922% ( 1) 00:15:14.928 3.579 - 3.603: 98.6004% ( 1) 00:15:14.928 3.627 - 3.650: 98.6086% ( 1) 00:15:14.928 3.674 - 3.698: 98.6250% ( 2) 00:15:14.928 3.698 - 3.721: 98.6413% ( 2) 00:15:14.928 3.745 - 3.769: 98.6495% ( 1) 00:15:14.928 3.769 - 3.793: 98.6577% ( 1) 00:15:14.928 3.793 - 3.816: 98.6659% ( 1) 00:15:14.928 3.816 - 3.840: 98.6741% ( 1) 00:15:14.928 3.840 - 3.864: 98.6823% ( 1) 00:15:14.928 3.935 - 3.959: 98.6905% ( 1) 00:15:14.928 4.030 - 4.053: 98.6986% ( 1) 00:15:14.928 4.196 - 4.219: 98.7068% ( 1) 00:15:14.928 4.433 - 4.456: 98.7150% ( 1) 00:15:14.928 5.641 - 5.665: 98.7232% ( 1) 00:15:14.928 5.665 - 5.689: 98.7314% ( 1) 00:15:14.928 5.689 - 5.713: 98.7396% ( 1) 00:15:14.928 6.021 - 6.044: 98.7477% ( 1) 00:15:14.928 6.400 - 6.447: 98.7559% ( 1) 00:15:14.928 6.637 - 6.684: 98.7641% ( 1) 00:15:14.928 6.732 - 6.779: 98.7723% ( 1) 00:15:14.928 7.064 - 7.111: 98.7805% ( 1) 00:15:14.928 7.206 - 7.253: 98.7887% ( 1) 00:15:14.928 7.253 - 7.301: 98.7969% ( 1) 00:15:14.928 7.301 - 7.348: 98.8050% ( 1) 00:15:14.928 8.012 - 8.059: 98.8132% ( 1) 00:15:14.928 8.107 - 8.154: 98.8214% ( 1) 00:15:14.928 8.154 - 8.201: 98.8296% ( 1) 00:15:14.929 8.344 - 8.391: 98.8460% ( 2) 00:15:14.929 9.624 - 9.671: 98.8541% ( 1) 00:15:14.929 15.455 - 15.550: 98.8623% ( 1) 00:15:14.929 15.550 - 15.644: 98.8787% ( 2) 00:15:14.929 15.644 - 15.739: 98.9033% ( 3) 00:15:14.929 15.739 - 15.834: 98.9196% ( 2) 00:15:14.929 15.834 - 15.929: 98.9278% ( 1) 00:15:14.929 16.024 - 16.119: 98.9687% ( 5) 00:15:14.929 16.119 - 16.213: 98.9933% ( 3) 00:15:14.929 16.213 - 16.308: 99.0178% ( 3) 00:15:14.929 16.308 - 16.403: 99.0424% ( 3) 
00:15:14.929 16.403 - 16.498: 99.0588% ( 2) 00:15:14.929 16.498 - 16.593: 99.0670% ( 1) 00:15:14.929 16.593 - 16.687: 99.1079% ( 5) 00:15:14.929 16.687 - 16.782: 99.1324% ( 3) 00:15:14.929 16.782 - 16.877: 99.1652% ( 4) 00:15:14.929 16.877 - 16.972: 99.1897% ( 3) 00:15:14.929 16.972 - 17.067: 99.2061% ( 2) 00:15:14.929 17.067 - 17.161: 99.2143% ( 1) 00:15:14.929 17.161 - 17.256: 99.2388% ( 3) 00:15:14.929 17.256 - 17.351: 99.2470% ( 1) 00:15:14.929 17.351 - 17.446: 99.2798% ( 4) 00:15:14.929 17.446 - 17.541: 99.2879% ( 1) 00:15:14.929 17.730 - 17.825: 99.3043% ( 2) 00:15:14.929 17.920 - 18.015: 99.3125% ( 1) 00:15:14.929 18.110 - 18.204: 99.3207% ( 1) 00:15:14.929 18.299 - 18.394: 99.3289% ( 1) 00:15:14.929 18.584 - 18.679: 99.3452% ( 2) 00:15:14.929 18.868 - 18.963: 99.3534% ( 1) 00:15:14.929 28.634 - 28.824: 99.3616% ( 1) 00:15:14.929 3835.070 - 3859.342: 99.3698% ( 1) 00:15:14.929 3980.705 - 4004.978: 99.8036% ( 53) 00:15:14.929 4004.978 - 4029.250: 99.9918% ( 23) 00:15:14.929 4029.250 - 4053.523: 100.0000% ( 1) 00:15:14.929 00:15:14.929 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:14.929 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:14.929 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:14.929 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:14.929 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:15.186 [ 00:15:15.186 { 00:15:15.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:15.186 "subtype": "Discovery", 00:15:15.186 "listen_addresses": [], 
00:15:15.186 "allow_any_host": true, 00:15:15.186 "hosts": [] 00:15:15.186 }, 00:15:15.186 { 00:15:15.186 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:15.186 "subtype": "NVMe", 00:15:15.186 "listen_addresses": [ 00:15:15.186 { 00:15:15.186 "trtype": "VFIOUSER", 00:15:15.186 "adrfam": "IPv4", 00:15:15.186 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:15.186 "trsvcid": "0" 00:15:15.186 } 00:15:15.186 ], 00:15:15.186 "allow_any_host": true, 00:15:15.186 "hosts": [], 00:15:15.186 "serial_number": "SPDK1", 00:15:15.186 "model_number": "SPDK bdev Controller", 00:15:15.186 "max_namespaces": 32, 00:15:15.186 "min_cntlid": 1, 00:15:15.186 "max_cntlid": 65519, 00:15:15.186 "namespaces": [ 00:15:15.186 { 00:15:15.186 "nsid": 1, 00:15:15.186 "bdev_name": "Malloc1", 00:15:15.186 "name": "Malloc1", 00:15:15.186 "nguid": "8968673CD1914E2383E6174FE3B90331", 00:15:15.186 "uuid": "8968673c-d191-4e23-83e6-174fe3b90331" 00:15:15.186 } 00:15:15.186 ] 00:15:15.186 }, 00:15:15.186 { 00:15:15.186 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:15.186 "subtype": "NVMe", 00:15:15.186 "listen_addresses": [ 00:15:15.186 { 00:15:15.186 "trtype": "VFIOUSER", 00:15:15.186 "adrfam": "IPv4", 00:15:15.186 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:15.186 "trsvcid": "0" 00:15:15.186 } 00:15:15.186 ], 00:15:15.186 "allow_any_host": true, 00:15:15.186 "hosts": [], 00:15:15.186 "serial_number": "SPDK2", 00:15:15.186 "model_number": "SPDK bdev Controller", 00:15:15.186 "max_namespaces": 32, 00:15:15.186 "min_cntlid": 1, 00:15:15.186 "max_cntlid": 65519, 00:15:15.186 "namespaces": [ 00:15:15.186 { 00:15:15.186 "nsid": 1, 00:15:15.186 "bdev_name": "Malloc2", 00:15:15.186 "name": "Malloc2", 00:15:15.186 "nguid": "AB4D20D0A5194FF18EAB285407D8C3E2", 00:15:15.186 "uuid": "ab4d20d0-a519-4ff1-8eab-285407d8c3e2" 00:15:15.186 } 00:15:15.186 ] 00:15:15.186 } 00:15:15.186 ] 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1095106 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:15.186 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:15.186 [2024-12-06 19:13:25.740891] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.443 Malloc3 00:15:15.443 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:15.701 [2024-12-06 19:13:26.136078] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.701 19:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:15.701 Asynchronous Event Request test 00:15:15.701 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.701 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.701 Registering asynchronous event callbacks... 00:15:15.701 Starting namespace attribute notice tests for all controllers... 00:15:15.701 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:15.701 aer_cb - Changed Namespace 00:15:15.701 Cleaning up... 00:15:15.958 [ 00:15:15.958 { 00:15:15.958 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:15.958 "subtype": "Discovery", 00:15:15.958 "listen_addresses": [], 00:15:15.958 "allow_any_host": true, 00:15:15.958 "hosts": [] 00:15:15.958 }, 00:15:15.958 { 00:15:15.958 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:15.958 "subtype": "NVMe", 00:15:15.958 "listen_addresses": [ 00:15:15.958 { 00:15:15.958 "trtype": "VFIOUSER", 00:15:15.958 "adrfam": "IPv4", 00:15:15.958 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:15.958 "trsvcid": "0" 00:15:15.958 } 00:15:15.958 ], 00:15:15.958 "allow_any_host": true, 00:15:15.958 "hosts": [], 00:15:15.958 "serial_number": "SPDK1", 00:15:15.958 "model_number": "SPDK bdev Controller", 00:15:15.958 "max_namespaces": 32, 00:15:15.958 "min_cntlid": 1, 00:15:15.958 "max_cntlid": 65519, 00:15:15.958 "namespaces": [ 00:15:15.958 { 00:15:15.958 "nsid": 1, 00:15:15.958 "bdev_name": "Malloc1", 00:15:15.958 "name": "Malloc1", 00:15:15.958 "nguid": "8968673CD1914E2383E6174FE3B90331", 00:15:15.958 "uuid": "8968673c-d191-4e23-83e6-174fe3b90331" 00:15:15.958 }, 00:15:15.958 { 00:15:15.958 "nsid": 2, 00:15:15.958 "bdev_name": "Malloc3", 00:15:15.958 "name": "Malloc3", 00:15:15.958 "nguid": "FB2956323BBD44C588656BDF01684C42", 00:15:15.958 "uuid": "fb295632-3bbd-44c5-8865-6bdf01684c42" 
00:15:15.958 } 00:15:15.958 ] 00:15:15.958 }, 00:15:15.958 { 00:15:15.958 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:15.958 "subtype": "NVMe", 00:15:15.958 "listen_addresses": [ 00:15:15.958 { 00:15:15.958 "trtype": "VFIOUSER", 00:15:15.958 "adrfam": "IPv4", 00:15:15.958 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:15.958 "trsvcid": "0" 00:15:15.958 } 00:15:15.958 ], 00:15:15.958 "allow_any_host": true, 00:15:15.958 "hosts": [], 00:15:15.958 "serial_number": "SPDK2", 00:15:15.958 "model_number": "SPDK bdev Controller", 00:15:15.958 "max_namespaces": 32, 00:15:15.958 "min_cntlid": 1, 00:15:15.958 "max_cntlid": 65519, 00:15:15.958 "namespaces": [ 00:15:15.958 { 00:15:15.958 "nsid": 1, 00:15:15.958 "bdev_name": "Malloc2", 00:15:15.958 "name": "Malloc2", 00:15:15.958 "nguid": "AB4D20D0A5194FF18EAB285407D8C3E2", 00:15:15.958 "uuid": "ab4d20d0-a519-4ff1-8eab-285407d8c3e2" 00:15:15.958 } 00:15:15.958 ] 00:15:15.958 } 00:15:15.959 ] 00:15:15.959 19:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1095106 00:15:15.959 19:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.959 19:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:15.959 19:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:15.959 19:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:15.959 [2024-12-06 19:13:26.450470] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:15:15.959 [2024-12-06 19:13:26.450508] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095239 ] 00:15:15.959 [2024-12-06 19:13:26.500741] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:15.959 [2024-12-06 19:13:26.506001] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:15.959 [2024-12-06 19:13:26.506052] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9eb7d96000 00:15:15.959 [2024-12-06 19:13:26.508674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.959 [2024-12-06 19:13:26.508997] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.959 [2024-12-06 19:13:26.510008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.959 [2024-12-06 19:13:26.511015] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.959 [2024-12-06 19:13:26.512037] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.959 [2024-12-06 19:13:26.513034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.959 [2024-12-06 19:13:26.514036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.959 
[2024-12-06 19:13:26.515043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.959 [2024-12-06 19:13:26.516049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:15.959 [2024-12-06 19:13:26.516070] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9eb7d8b000 00:15:15.959 [2024-12-06 19:13:26.517189] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:16.218 [2024-12-06 19:13:26.536047] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:16.218 [2024-12-06 19:13:26.536085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:16.218 [2024-12-06 19:13:26.538178] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:16.218 [2024-12-06 19:13:26.538235] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:16.218 [2024-12-06 19:13:26.538332] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:16.218 [2024-12-06 19:13:26.538356] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:16.218 [2024-12-06 19:13:26.538366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:16.218 [2024-12-06 19:13:26.539163] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:16.218 [2024-12-06 19:13:26.539189] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:16.218 [2024-12-06 19:13:26.539203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:16.218 [2024-12-06 19:13:26.540169] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:16.218 [2024-12-06 19:13:26.540191] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:16.218 [2024-12-06 19:13:26.540206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:16.218 [2024-12-06 19:13:26.541175] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:16.218 [2024-12-06 19:13:26.541196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:16.218 [2024-12-06 19:13:26.542180] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:16.218 [2024-12-06 19:13:26.542201] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:16.218 [2024-12-06 19:13:26.542210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:16.218 [2024-12-06 19:13:26.542222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:16.218 [2024-12-06 19:13:26.542332] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:16.218 [2024-12-06 19:13:26.542340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:16.218 [2024-12-06 19:13:26.542348] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:16.218 [2024-12-06 19:13:26.543185] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:16.218 [2024-12-06 19:13:26.544190] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:16.218 [2024-12-06 19:13:26.545198] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:16.218 [2024-12-06 19:13:26.546191] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.218 [2024-12-06 19:13:26.546277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:16.218 [2024-12-06 19:13:26.547207] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:16.218 [2024-12-06 19:13:26.547227] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:16.218 [2024-12-06 19:13:26.547237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:16.218 [2024-12-06 19:13:26.547260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:16.218 [2024-12-06 19:13:26.547278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:16.218 [2024-12-06 19:13:26.547305] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.218 [2024-12-06 19:13:26.547314] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.218 [2024-12-06 19:13:26.547320] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.218 [2024-12-06 19:13:26.547338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.218 [2024-12-06 19:13:26.553682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:16.218 [2024-12-06 19:13:26.553708] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:16.218 [2024-12-06 19:13:26.553717] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:16.218 [2024-12-06 19:13:26.553725] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:16.218 [2024-12-06 19:13:26.553738] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:16.218 [2024-12-06 19:13:26.553747] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:16.218 [2024-12-06 19:13:26.553756] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:16.218 [2024-12-06 19:13:26.553764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:16.218 [2024-12-06 19:13:26.553777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:16.218 [2024-12-06 19:13:26.553794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:16.218 [2024-12-06 19:13:26.561694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.561719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.219 [2024-12-06 19:13:26.561733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.219 [2024-12-06 19:13:26.561744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.219 [2024-12-06 19:13:26.561756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.219 [2024-12-06 19:13:26.561764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.561782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.561797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.569690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.569710] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:16.219 [2024-12-06 19:13:26.569720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.569737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.569749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.569763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.577691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.577773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.577791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:16.219 
[2024-12-06 19:13:26.577805] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:16.219 [2024-12-06 19:13:26.577817] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:16.219 [2024-12-06 19:13:26.577824] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.219 [2024-12-06 19:13:26.577833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.585674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.585717] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:16.219 [2024-12-06 19:13:26.585734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.585750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.585762] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.219 [2024-12-06 19:13:26.585771] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.219 [2024-12-06 19:13:26.585776] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.219 [2024-12-06 19:13:26.585786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.593674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.593699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.593715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.593729] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.219 [2024-12-06 19:13:26.593737] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.219 [2024-12-06 19:13:26.593743] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.219 [2024-12-06 19:13:26.593753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.601690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.601718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.601733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.601746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.601757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.601766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.601774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.601783] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:16.219 [2024-12-06 19:13:26.601794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:16.219 [2024-12-06 19:13:26.601804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:16.219 [2024-12-06 19:13:26.601831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.609691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.609729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.617696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.617721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.625678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 
19:13:26.625702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.633674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.633706] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:16.219 [2024-12-06 19:13:26.633718] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:16.219 [2024-12-06 19:13:26.633724] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:16.219 [2024-12-06 19:13:26.633730] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:16.219 [2024-12-06 19:13:26.633736] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:16.219 [2024-12-06 19:13:26.633745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:16.219 [2024-12-06 19:13:26.633757] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:16.219 [2024-12-06 19:13:26.633765] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:16.219 [2024-12-06 19:13:26.633771] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.219 [2024-12-06 19:13:26.633780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.633791] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:16.219 [2024-12-06 19:13:26.633799] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.219 [2024-12-06 19:13:26.633805] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.219 [2024-12-06 19:13:26.633813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.633826] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:16.219 [2024-12-06 19:13:26.633834] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:16.219 [2024-12-06 19:13:26.633839] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.219 [2024-12-06 19:13:26.633848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:16.219 [2024-12-06 19:13:26.641678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.641705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.641724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:16.219 [2024-12-06 19:13:26.641736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:16.219 ===================================================== 00:15:16.219 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:16.219 ===================================================== 00:15:16.219 Controller Capabilities/Features 00:15:16.219 
================================ 00:15:16.219 Vendor ID: 4e58 00:15:16.219 Subsystem Vendor ID: 4e58 00:15:16.219 Serial Number: SPDK2 00:15:16.219 Model Number: SPDK bdev Controller 00:15:16.219 Firmware Version: 25.01 00:15:16.219 Recommended Arb Burst: 6 00:15:16.219 IEEE OUI Identifier: 8d 6b 50 00:15:16.219 Multi-path I/O 00:15:16.219 May have multiple subsystem ports: Yes 00:15:16.219 May have multiple controllers: Yes 00:15:16.220 Associated with SR-IOV VF: No 00:15:16.220 Max Data Transfer Size: 131072 00:15:16.220 Max Number of Namespaces: 32 00:15:16.220 Max Number of I/O Queues: 127 00:15:16.220 NVMe Specification Version (VS): 1.3 00:15:16.220 NVMe Specification Version (Identify): 1.3 00:15:16.220 Maximum Queue Entries: 256 00:15:16.220 Contiguous Queues Required: Yes 00:15:16.220 Arbitration Mechanisms Supported 00:15:16.220 Weighted Round Robin: Not Supported 00:15:16.220 Vendor Specific: Not Supported 00:15:16.220 Reset Timeout: 15000 ms 00:15:16.220 Doorbell Stride: 4 bytes 00:15:16.220 NVM Subsystem Reset: Not Supported 00:15:16.220 Command Sets Supported 00:15:16.220 NVM Command Set: Supported 00:15:16.220 Boot Partition: Not Supported 00:15:16.220 Memory Page Size Minimum: 4096 bytes 00:15:16.220 Memory Page Size Maximum: 4096 bytes 00:15:16.220 Persistent Memory Region: Not Supported 00:15:16.220 Optional Asynchronous Events Supported 00:15:16.220 Namespace Attribute Notices: Supported 00:15:16.220 Firmware Activation Notices: Not Supported 00:15:16.220 ANA Change Notices: Not Supported 00:15:16.220 PLE Aggregate Log Change Notices: Not Supported 00:15:16.220 LBA Status Info Alert Notices: Not Supported 00:15:16.220 EGE Aggregate Log Change Notices: Not Supported 00:15:16.220 Normal NVM Subsystem Shutdown event: Not Supported 00:15:16.220 Zone Descriptor Change Notices: Not Supported 00:15:16.220 Discovery Log Change Notices: Not Supported 00:15:16.220 Controller Attributes 00:15:16.220 128-bit Host Identifier: Supported 00:15:16.220 
Non-Operational Permissive Mode: Not Supported 00:15:16.220 NVM Sets: Not Supported 00:15:16.220 Read Recovery Levels: Not Supported 00:15:16.220 Endurance Groups: Not Supported 00:15:16.220 Predictable Latency Mode: Not Supported 00:15:16.220 Traffic Based Keep ALive: Not Supported 00:15:16.220 Namespace Granularity: Not Supported 00:15:16.220 SQ Associations: Not Supported 00:15:16.220 UUID List: Not Supported 00:15:16.220 Multi-Domain Subsystem: Not Supported 00:15:16.220 Fixed Capacity Management: Not Supported 00:15:16.220 Variable Capacity Management: Not Supported 00:15:16.220 Delete Endurance Group: Not Supported 00:15:16.220 Delete NVM Set: Not Supported 00:15:16.220 Extended LBA Formats Supported: Not Supported 00:15:16.220 Flexible Data Placement Supported: Not Supported 00:15:16.220 00:15:16.220 Controller Memory Buffer Support 00:15:16.220 ================================ 00:15:16.220 Supported: No 00:15:16.220 00:15:16.220 Persistent Memory Region Support 00:15:16.220 ================================ 00:15:16.220 Supported: No 00:15:16.220 00:15:16.220 Admin Command Set Attributes 00:15:16.220 ============================ 00:15:16.220 Security Send/Receive: Not Supported 00:15:16.220 Format NVM: Not Supported 00:15:16.220 Firmware Activate/Download: Not Supported 00:15:16.220 Namespace Management: Not Supported 00:15:16.220 Device Self-Test: Not Supported 00:15:16.220 Directives: Not Supported 00:15:16.220 NVMe-MI: Not Supported 00:15:16.220 Virtualization Management: Not Supported 00:15:16.220 Doorbell Buffer Config: Not Supported 00:15:16.220 Get LBA Status Capability: Not Supported 00:15:16.220 Command & Feature Lockdown Capability: Not Supported 00:15:16.220 Abort Command Limit: 4 00:15:16.220 Async Event Request Limit: 4 00:15:16.220 Number of Firmware Slots: N/A 00:15:16.220 Firmware Slot 1 Read-Only: N/A 00:15:16.220 Firmware Activation Without Reset: N/A 00:15:16.220 Multiple Update Detection Support: N/A 00:15:16.220 Firmware Update 
Granularity: No Information Provided 00:15:16.220 Per-Namespace SMART Log: No 00:15:16.220 Asymmetric Namespace Access Log Page: Not Supported 00:15:16.220 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:16.220 Command Effects Log Page: Supported 00:15:16.220 Get Log Page Extended Data: Supported 00:15:16.220 Telemetry Log Pages: Not Supported 00:15:16.220 Persistent Event Log Pages: Not Supported 00:15:16.220 Supported Log Pages Log Page: May Support 00:15:16.220 Commands Supported & Effects Log Page: Not Supported 00:15:16.220 Feature Identifiers & Effects Log Page:May Support 00:15:16.220 NVMe-MI Commands & Effects Log Page: May Support 00:15:16.220 Data Area 4 for Telemetry Log: Not Supported 00:15:16.220 Error Log Page Entries Supported: 128 00:15:16.220 Keep Alive: Supported 00:15:16.220 Keep Alive Granularity: 10000 ms 00:15:16.220 00:15:16.220 NVM Command Set Attributes 00:15:16.220 ========================== 00:15:16.220 Submission Queue Entry Size 00:15:16.220 Max: 64 00:15:16.220 Min: 64 00:15:16.220 Completion Queue Entry Size 00:15:16.220 Max: 16 00:15:16.220 Min: 16 00:15:16.220 Number of Namespaces: 32 00:15:16.220 Compare Command: Supported 00:15:16.220 Write Uncorrectable Command: Not Supported 00:15:16.220 Dataset Management Command: Supported 00:15:16.220 Write Zeroes Command: Supported 00:15:16.220 Set Features Save Field: Not Supported 00:15:16.220 Reservations: Not Supported 00:15:16.220 Timestamp: Not Supported 00:15:16.220 Copy: Supported 00:15:16.220 Volatile Write Cache: Present 00:15:16.220 Atomic Write Unit (Normal): 1 00:15:16.220 Atomic Write Unit (PFail): 1 00:15:16.220 Atomic Compare & Write Unit: 1 00:15:16.220 Fused Compare & Write: Supported 00:15:16.220 Scatter-Gather List 00:15:16.220 SGL Command Set: Supported (Dword aligned) 00:15:16.220 SGL Keyed: Not Supported 00:15:16.220 SGL Bit Bucket Descriptor: Not Supported 00:15:16.220 SGL Metadata Pointer: Not Supported 00:15:16.220 Oversized SGL: Not Supported 00:15:16.220 SGL 
Metadata Address: Not Supported 00:15:16.220 SGL Offset: Not Supported 00:15:16.220 Transport SGL Data Block: Not Supported 00:15:16.220 Replay Protected Memory Block: Not Supported 00:15:16.220 00:15:16.220 Firmware Slot Information 00:15:16.220 ========================= 00:15:16.220 Active slot: 1 00:15:16.220 Slot 1 Firmware Revision: 25.01 00:15:16.220 00:15:16.220 00:15:16.220 Commands Supported and Effects 00:15:16.220 ============================== 00:15:16.220 Admin Commands 00:15:16.220 -------------- 00:15:16.220 Get Log Page (02h): Supported 00:15:16.220 Identify (06h): Supported 00:15:16.220 Abort (08h): Supported 00:15:16.220 Set Features (09h): Supported 00:15:16.220 Get Features (0Ah): Supported 00:15:16.220 Asynchronous Event Request (0Ch): Supported 00:15:16.220 Keep Alive (18h): Supported 00:15:16.220 I/O Commands 00:15:16.220 ------------ 00:15:16.220 Flush (00h): Supported LBA-Change 00:15:16.220 Write (01h): Supported LBA-Change 00:15:16.220 Read (02h): Supported 00:15:16.220 Compare (05h): Supported 00:15:16.220 Write Zeroes (08h): Supported LBA-Change 00:15:16.220 Dataset Management (09h): Supported LBA-Change 00:15:16.220 Copy (19h): Supported LBA-Change 00:15:16.220 00:15:16.220 Error Log 00:15:16.220 ========= 00:15:16.220 00:15:16.220 Arbitration 00:15:16.220 =========== 00:15:16.220 Arbitration Burst: 1 00:15:16.220 00:15:16.220 Power Management 00:15:16.220 ================ 00:15:16.220 Number of Power States: 1 00:15:16.220 Current Power State: Power State #0 00:15:16.220 Power State #0: 00:15:16.220 Max Power: 0.00 W 00:15:16.220 Non-Operational State: Operational 00:15:16.220 Entry Latency: Not Reported 00:15:16.220 Exit Latency: Not Reported 00:15:16.220 Relative Read Throughput: 0 00:15:16.220 Relative Read Latency: 0 00:15:16.220 Relative Write Throughput: 0 00:15:16.220 Relative Write Latency: 0 00:15:16.220 Idle Power: Not Reported 00:15:16.220 Active Power: Not Reported 00:15:16.220 Non-Operational Permissive Mode: Not 
Supported 00:15:16.220 00:15:16.220 Health Information 00:15:16.220 ================== 00:15:16.220 Critical Warnings: 00:15:16.220 Available Spare Space: OK 00:15:16.220 Temperature: OK 00:15:16.220 Device Reliability: OK 00:15:16.220 Read Only: No 00:15:16.220 Volatile Memory Backup: OK 00:15:16.220 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:16.220 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:16.220 Available Spare: 0% 00:15:16.220 Available Sp[2024-12-06 19:13:26.641866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:16.220 [2024-12-06 19:13:26.649678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:16.220 [2024-12-06 19:13:26.649734] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:16.220 [2024-12-06 19:13:26.649752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.220 [2024-12-06 19:13:26.649763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.220 [2024-12-06 19:13:26.649773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.221 [2024-12-06 19:13:26.649782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.221 [2024-12-06 19:13:26.649873] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:16.221 [2024-12-06 19:13:26.649894] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:16.221 
[2024-12-06 19:13:26.650873] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:16.221 [2024-12-06 19:13:26.650962] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:16.221 [2024-12-06 19:13:26.650992] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:16.221 [2024-12-06 19:13:26.651880] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:16.221 [2024-12-06 19:13:26.651905] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:16.221 [2024-12-06 19:13:26.651973] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:16.221 [2024-12-06 19:13:26.653204] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:16.221 are Threshold: 0% 00:15:16.221 Life Percentage Used: 0% 00:15:16.221 Data Units Read: 0 00:15:16.221 Data Units Written: 0 00:15:16.221 Host Read Commands: 0 00:15:16.221 Host Write Commands: 0 00:15:16.221 Controller Busy Time: 0 minutes 00:15:16.221 Power Cycles: 0 00:15:16.221 Power On Hours: 0 hours 00:15:16.221 Unsafe Shutdowns: 0 00:15:16.221 Unrecoverable Media Errors: 0 00:15:16.221 Lifetime Error Log Entries: 0 00:15:16.221 Warning Temperature Time: 0 minutes 00:15:16.221 Critical Temperature Time: 0 minutes 00:15:16.221 00:15:16.221 Number of Queues 00:15:16.221 ================ 00:15:16.221 Number of I/O Submission Queues: 127 00:15:16.221 Number of I/O Completion Queues: 127 00:15:16.221 00:15:16.221 Active Namespaces 00:15:16.221 ================= 00:15:16.221 Namespace ID:1 00:15:16.221 Error Recovery Timeout: Unlimited 
00:15:16.221 Command Set Identifier: NVM (00h) 00:15:16.221 Deallocate: Supported 00:15:16.221 Deallocated/Unwritten Error: Not Supported 00:15:16.221 Deallocated Read Value: Unknown 00:15:16.221 Deallocate in Write Zeroes: Not Supported 00:15:16.221 Deallocated Guard Field: 0xFFFF 00:15:16.221 Flush: Supported 00:15:16.221 Reservation: Supported 00:15:16.221 Namespace Sharing Capabilities: Multiple Controllers 00:15:16.221 Size (in LBAs): 131072 (0GiB) 00:15:16.221 Capacity (in LBAs): 131072 (0GiB) 00:15:16.221 Utilization (in LBAs): 131072 (0GiB) 00:15:16.221 NGUID: AB4D20D0A5194FF18EAB285407D8C3E2 00:15:16.221 UUID: ab4d20d0-a519-4ff1-8eab-285407d8c3e2 00:15:16.221 Thin Provisioning: Not Supported 00:15:16.221 Per-NS Atomic Units: Yes 00:15:16.221 Atomic Boundary Size (Normal): 0 00:15:16.221 Atomic Boundary Size (PFail): 0 00:15:16.221 Atomic Boundary Offset: 0 00:15:16.221 Maximum Single Source Range Length: 65535 00:15:16.221 Maximum Copy Length: 65535 00:15:16.221 Maximum Source Range Count: 1 00:15:16.221 NGUID/EUI64 Never Reused: No 00:15:16.221 Namespace Write Protected: No 00:15:16.221 Number of LBA Formats: 1 00:15:16.221 Current LBA Format: LBA Format #00 00:15:16.221 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:16.221 00:15:16.221 19:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:16.479 [2024-12-06 19:13:26.904484] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.742 Initializing NVMe Controllers 00:15:21.742 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.742 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:21.742 Initialization complete. Launching workers. 00:15:21.742 ======================================================== 00:15:21.742 Latency(us) 00:15:21.742 Device Information : IOPS MiB/s Average min max 00:15:21.742 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31315.15 122.32 4087.03 1199.81 10076.29 00:15:21.742 ======================================================== 00:15:21.742 Total : 31315.15 122.32 4087.03 1199.81 10076.29 00:15:21.742 00:15:21.742 [2024-12-06 19:13:32.009033] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.742 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:21.742 [2024-12-06 19:13:32.272763] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.025 Initializing NVMe Controllers 00:15:27.025 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:27.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:27.025 Initialization complete. Launching workers. 
00:15:27.025 ======================================================== 00:15:27.025 Latency(us) 00:15:27.025 Device Information : IOPS MiB/s Average min max 00:15:27.025 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30113.89 117.63 4250.73 1210.37 7540.81 00:15:27.025 ======================================================== 00:15:27.025 Total : 30113.89 117.63 4250.73 1210.37 7540.81 00:15:27.025 00:15:27.025 [2024-12-06 19:13:37.295579] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.025 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:27.025 [2024-12-06 19:13:37.531588] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.331 [2024-12-06 19:13:42.661818] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.331 Initializing NVMe Controllers 00:15:32.331 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:32.331 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:32.331 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:32.331 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:32.331 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:32.331 Initialization complete. Launching workers. 
00:15:32.331 Starting thread on core 2 00:15:32.331 Starting thread on core 3 00:15:32.331 Starting thread on core 1 00:15:32.331 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:32.592 [2024-12-06 19:13:42.984295] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.883 [2024-12-06 19:13:46.060079] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.883 Initializing NVMe Controllers 00:15:35.883 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.883 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.883 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:35.883 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:35.883 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:35.883 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:35.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:35.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:35.883 Initialization complete. Launching workers. 
00:15:35.883 Starting thread on core 1 with urgent priority queue 00:15:35.883 Starting thread on core 2 with urgent priority queue 00:15:35.883 Starting thread on core 3 with urgent priority queue 00:15:35.883 Starting thread on core 0 with urgent priority queue 00:15:35.883 SPDK bdev Controller (SPDK2 ) core 0: 5391.67 IO/s 18.55 secs/100000 ios 00:15:35.883 SPDK bdev Controller (SPDK2 ) core 1: 5053.67 IO/s 19.79 secs/100000 ios 00:15:35.883 SPDK bdev Controller (SPDK2 ) core 2: 5302.33 IO/s 18.86 secs/100000 ios 00:15:35.883 SPDK bdev Controller (SPDK2 ) core 3: 5393.00 IO/s 18.54 secs/100000 ios 00:15:35.883 ======================================================== 00:15:35.883 00:15:35.883 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:35.883 [2024-12-06 19:13:46.379343] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.883 Initializing NVMe Controllers 00:15:35.883 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.883 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.883 Namespace ID: 1 size: 0GB 00:15:35.883 Initialization complete. 00:15:35.883 INFO: using host memory buffer for IO 00:15:35.883 Hello world! 
00:15:35.883 [2024-12-06 19:13:46.389413] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.883 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:36.144 [2024-12-06 19:13:46.706097] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.524 Initializing NVMe Controllers 00:15:37.524 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.524 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.524 Initialization complete. Launching workers. 00:15:37.524 submit (in ns) avg, min, max = 5523.5, 3484.4, 4016057.8 00:15:37.524 complete (in ns) avg, min, max = 31827.0, 2072.2, 4022791.1 00:15:37.524 00:15:37.524 Submit histogram 00:15:37.524 ================ 00:15:37.524 Range in us Cumulative Count 00:15:37.524 3.484 - 3.508: 0.0164% ( 2) 00:15:37.524 3.508 - 3.532: 0.3927% ( 46) 00:15:37.524 3.532 - 3.556: 1.2681% ( 107) 00:15:37.524 3.556 - 3.579: 3.7552% ( 304) 00:15:37.524 3.579 - 3.603: 7.6250% ( 473) 00:15:37.524 3.603 - 3.627: 15.0863% ( 912) 00:15:37.524 3.627 - 3.650: 24.2166% ( 1116) 00:15:37.524 3.650 - 3.674: 33.0852% ( 1084) 00:15:37.524 3.674 - 3.698: 40.6201% ( 921) 00:15:37.524 3.698 - 3.721: 47.4843% ( 839) 00:15:37.524 3.721 - 3.745: 53.3584% ( 718) 00:15:37.524 3.745 - 3.769: 57.8500% ( 549) 00:15:37.524 3.769 - 3.793: 61.9733% ( 504) 00:15:37.524 3.793 - 3.816: 64.9922% ( 369) 00:15:37.524 3.816 - 3.840: 68.7556% ( 460) 00:15:37.524 3.840 - 3.864: 72.3881% ( 444) 00:15:37.524 3.864 - 3.887: 76.7406% ( 532) 00:15:37.524 3.887 - 3.911: 80.7494% ( 490) 00:15:37.524 3.911 - 3.935: 84.2265% ( 425) 00:15:37.524 3.935 - 3.959: 86.4027% ( 266) 00:15:37.524 3.959 - 3.982: 87.9571% ( 190) 
00:15:37.524 3.982 - 4.006: 89.5525% ( 195) 00:15:37.524 4.006 - 4.030: 90.9024% ( 165) 00:15:37.524 4.030 - 4.053: 91.9005% ( 122) 00:15:37.524 4.053 - 4.077: 92.7759% ( 107) 00:15:37.524 4.077 - 4.101: 93.5940% ( 100) 00:15:37.524 4.101 - 4.124: 94.3631% ( 94) 00:15:37.524 4.124 - 4.148: 94.8949% ( 65) 00:15:37.524 4.148 - 4.172: 95.4103% ( 63) 00:15:37.524 4.172 - 4.196: 95.7212% ( 38) 00:15:37.524 4.196 - 4.219: 95.9584% ( 29) 00:15:37.524 4.219 - 4.243: 96.1957% ( 29) 00:15:37.524 4.243 - 4.267: 96.4493% ( 31) 00:15:37.524 4.267 - 4.290: 96.5557% ( 13) 00:15:37.524 4.290 - 4.314: 96.6784% ( 15) 00:15:37.524 4.314 - 4.338: 96.7111% ( 4) 00:15:37.525 4.338 - 4.361: 96.8420% ( 16) 00:15:37.525 4.361 - 4.385: 96.9320% ( 11) 00:15:37.525 4.385 - 4.409: 97.0220% ( 11) 00:15:37.525 4.409 - 4.433: 97.0466% ( 3) 00:15:37.525 4.433 - 4.456: 97.0956% ( 6) 00:15:37.525 4.456 - 4.480: 97.1202% ( 3) 00:15:37.525 4.480 - 4.504: 97.1611% ( 5) 00:15:37.525 4.504 - 4.527: 97.1938% ( 4) 00:15:37.525 4.527 - 4.551: 97.2020% ( 1) 00:15:37.525 4.551 - 4.575: 97.2102% ( 1) 00:15:37.525 4.575 - 4.599: 97.2265% ( 2) 00:15:37.525 4.622 - 4.646: 97.2347% ( 1) 00:15:37.525 4.693 - 4.717: 97.2593% ( 3) 00:15:37.525 4.717 - 4.741: 97.2838% ( 3) 00:15:37.525 4.764 - 4.788: 97.3084% ( 3) 00:15:37.525 4.788 - 4.812: 97.3738% ( 8) 00:15:37.525 4.812 - 4.836: 97.4147% ( 5) 00:15:37.525 4.836 - 4.859: 97.4802% ( 8) 00:15:37.525 4.859 - 4.883: 97.5456% ( 8) 00:15:37.525 4.883 - 4.907: 97.6111% ( 8) 00:15:37.525 4.907 - 4.930: 97.6847% ( 9) 00:15:37.525 4.930 - 4.954: 97.7420% ( 7) 00:15:37.525 4.954 - 4.978: 97.7910% ( 6) 00:15:37.525 4.978 - 5.001: 97.8401% ( 6) 00:15:37.525 5.001 - 5.025: 97.8892% ( 6) 00:15:37.525 5.025 - 5.049: 97.9301% ( 5) 00:15:37.525 5.049 - 5.073: 97.9792% ( 6) 00:15:37.525 5.073 - 5.096: 98.0201% ( 5) 00:15:37.525 5.096 - 5.120: 98.0610% ( 5) 00:15:37.525 5.120 - 5.144: 98.0856% ( 3) 00:15:37.525 5.144 - 5.167: 98.0938% ( 1) 00:15:37.525 5.167 - 5.191: 98.1101% ( 2) 
00:15:37.525 5.191 - 5.215: 98.1265% ( 2) 00:15:37.525 5.215 - 5.239: 98.1428% ( 2) 00:15:37.525 5.262 - 5.286: 98.1510% ( 1) 00:15:37.525 5.286 - 5.310: 98.1674% ( 2) 00:15:37.525 5.310 - 5.333: 98.1756% ( 1) 00:15:37.525 5.333 - 5.357: 98.1838% ( 1) 00:15:37.525 5.381 - 5.404: 98.1919% ( 1) 00:15:37.525 5.404 - 5.428: 98.2001% ( 1) 00:15:37.525 5.476 - 5.499: 98.2247% ( 3) 00:15:37.525 5.499 - 5.523: 98.2328% ( 1) 00:15:37.525 5.547 - 5.570: 98.2492% ( 2) 00:15:37.525 5.570 - 5.594: 98.2574% ( 1) 00:15:37.525 5.594 - 5.618: 98.2656% ( 1) 00:15:37.525 5.618 - 5.641: 98.2737% ( 1) 00:15:37.525 6.116 - 6.163: 98.2983% ( 3) 00:15:37.525 6.210 - 6.258: 98.3065% ( 1) 00:15:37.525 6.258 - 6.305: 98.3147% ( 1) 00:15:37.525 6.353 - 6.400: 98.3310% ( 2) 00:15:37.525 6.447 - 6.495: 98.3392% ( 1) 00:15:37.525 6.590 - 6.637: 98.3474% ( 1) 00:15:37.525 6.637 - 6.684: 98.3556% ( 1) 00:15:37.525 6.779 - 6.827: 98.3637% ( 1) 00:15:37.525 6.827 - 6.874: 98.3719% ( 1) 00:15:37.525 6.969 - 7.016: 98.3801% ( 1) 00:15:37.525 7.159 - 7.206: 98.3883% ( 1) 00:15:37.525 7.206 - 7.253: 98.3965% ( 1) 00:15:37.525 7.253 - 7.301: 98.4046% ( 1) 00:15:37.525 7.301 - 7.348: 98.4128% ( 1) 00:15:37.525 7.348 - 7.396: 98.4210% ( 1) 00:15:37.525 7.490 - 7.538: 98.4292% ( 1) 00:15:37.525 7.538 - 7.585: 98.4374% ( 1) 00:15:37.525 7.585 - 7.633: 98.4456% ( 1) 00:15:37.525 7.633 - 7.680: 98.4619% ( 2) 00:15:37.525 7.870 - 7.917: 98.4701% ( 1) 00:15:37.525 7.964 - 8.012: 98.4783% ( 1) 00:15:37.525 8.012 - 8.059: 98.4865% ( 1) 00:15:37.525 8.059 - 8.107: 98.4946% ( 1) 00:15:37.525 8.107 - 8.154: 98.5028% ( 1) 00:15:37.525 8.249 - 8.296: 98.5274% ( 3) 00:15:37.525 8.296 - 8.344: 98.5355% ( 1) 00:15:37.525 8.344 - 8.391: 98.5437% ( 1) 00:15:37.525 8.486 - 8.533: 98.5519% ( 1) 00:15:37.525 8.581 - 8.628: 98.5683% ( 2) 00:15:37.525 8.676 - 8.723: 98.5765% ( 1) 00:15:37.525 8.723 - 8.770: 98.5928% ( 2) 00:15:37.525 8.818 - 8.865: 98.6010% ( 1) 00:15:37.525 8.865 - 8.913: 98.6092% ( 1) 00:15:37.525 9.007 - 
9.055: 98.6174% ( 1) 00:15:37.525 9.102 - 9.150: 98.6255% ( 1) 00:15:37.525 9.150 - 9.197: 98.6337% ( 1) 00:15:37.525 9.197 - 9.244: 98.6419% ( 1) 00:15:37.525 9.292 - 9.339: 98.6501% ( 1) 00:15:37.525 9.339 - 9.387: 98.6583% ( 1) 00:15:37.525 9.624 - 9.671: 98.6664% ( 1) 00:15:37.525 9.671 - 9.719: 98.6746% ( 1) 00:15:37.525 9.813 - 9.861: 98.6828% ( 1) 00:15:37.525 9.956 - 10.003: 98.6910% ( 1) 00:15:37.525 10.003 - 10.050: 98.6992% ( 1) 00:15:37.525 10.145 - 10.193: 98.7155% ( 2) 00:15:37.525 10.240 - 10.287: 98.7237% ( 1) 00:15:37.525 10.287 - 10.335: 98.7319% ( 1) 00:15:37.525 10.430 - 10.477: 98.7401% ( 1) 00:15:37.525 10.524 - 10.572: 98.7483% ( 1) 00:15:37.525 10.572 - 10.619: 98.7646% ( 2) 00:15:37.525 10.619 - 10.667: 98.7728% ( 1) 00:15:37.525 10.667 - 10.714: 98.7810% ( 1) 00:15:37.525 10.809 - 10.856: 98.7892% ( 1) 00:15:37.525 11.141 - 11.188: 98.7973% ( 1) 00:15:37.525 11.425 - 11.473: 98.8055% ( 1) 00:15:37.525 11.662 - 11.710: 98.8137% ( 1) 00:15:37.525 11.757 - 11.804: 98.8219% ( 1) 00:15:37.525 12.326 - 12.421: 98.8301% ( 1) 00:15:37.525 12.516 - 12.610: 98.8383% ( 1) 00:15:37.525 12.705 - 12.800: 98.8464% ( 1) 00:15:37.525 12.800 - 12.895: 98.8546% ( 1) 00:15:37.525 12.990 - 13.084: 98.8628% ( 1) 00:15:37.525 13.084 - 13.179: 98.8873% ( 3) 00:15:37.525 13.274 - 13.369: 98.8955% ( 1) 00:15:37.525 13.369 - 13.464: 98.9119% ( 2) 00:15:37.525 13.464 - 13.559: 98.9446% ( 4) 00:15:37.525 13.559 - 13.653: 98.9773% ( 4) 00:15:37.525 13.748 - 13.843: 98.9937% ( 2) 00:15:37.525 13.843 - 13.938: 99.0019% ( 1) 00:15:37.525 13.938 - 14.033: 99.0101% ( 1) 00:15:37.525 14.033 - 14.127: 99.0182% ( 1) 00:15:37.525 14.222 - 14.317: 99.0264% ( 1) 00:15:37.525 14.696 - 14.791: 99.0428% ( 2) 00:15:37.525 14.791 - 14.886: 99.0592% ( 2) 00:15:37.525 14.886 - 14.981: 99.0755% ( 2) 00:15:37.525 14.981 - 15.076: 99.0837% ( 1) 00:15:37.525 17.067 - 17.161: 99.0919% ( 1) 00:15:37.525 17.161 - 17.256: 99.1082% ( 2) 00:15:37.525 17.256 - 17.351: 99.1246% ( 2) 00:15:37.525 
17.351 - 17.446: 99.1410% ( 2) 00:15:37.525 17.446 - 17.541: 99.1491% ( 1) 00:15:37.525 17.541 - 17.636: 99.1819% ( 4) 00:15:37.525 17.636 - 17.730: 99.1901% ( 1) 00:15:37.525 17.730 - 17.825: 99.2310% ( 5) 00:15:37.525 17.825 - 17.920: 99.2882% ( 7) 00:15:37.525 17.920 - 18.015: 99.3537% ( 8) 00:15:37.525 18.015 - 18.110: 99.4109% ( 7) 00:15:37.525 18.110 - 18.204: 99.5091% ( 12) 00:15:37.525 18.204 - 18.299: 99.5337% ( 3) 00:15:37.525 18.299 - 18.394: 99.5991% ( 8) 00:15:37.525 18.394 - 18.489: 99.6646% ( 8) 00:15:37.525 18.489 - 18.584: 99.7137% ( 6) 00:15:37.525 18.584 - 18.679: 99.7546% ( 5) 00:15:37.525 18.679 - 18.773: 99.7873% ( 4) 00:15:37.525 18.773 - 18.868: 99.8282% ( 5) 00:15:37.525 18.868 - 18.963: 99.8446% ( 2) 00:15:37.525 18.963 - 19.058: 99.8527% ( 1) 00:15:37.525 19.058 - 19.153: 99.8609% ( 1) 00:15:37.525 19.153 - 19.247: 99.8691% ( 1) 00:15:37.525 19.437 - 19.532: 99.8773% ( 1) 00:15:37.525 19.627 - 19.721: 99.8855% ( 1) 00:15:37.525 22.187 - 22.281: 99.8936% ( 1) 00:15:37.525 22.566 - 22.661: 99.9018% ( 1) 00:15:37.525 22.661 - 22.756: 99.9100% ( 1) 00:15:37.525 24.652 - 24.841: 99.9182% ( 1) 00:15:37.525 26.359 - 26.548: 99.9264% ( 1) 00:15:37.525 27.876 - 28.065: 99.9345% ( 1) 00:15:37.525 28.824 - 29.013: 99.9427% ( 1) 00:15:37.525 29.772 - 29.961: 99.9509% ( 1) 00:15:37.525 31.289 - 31.479: 99.9591% ( 1) 00:15:37.525 3009.801 - 3021.938: 99.9673% ( 1) 00:15:37.525 3980.705 - 4004.978: 99.9755% ( 1) 00:15:37.525 4004.978 - 4029.250: 100.0000% ( 3) 00:15:37.525 00:15:37.525 Complete histogram 00:15:37.525 ================== 00:15:37.525 Range in us Cumulative Count 00:15:37.525 2.062 - 2.074: 0.0409% ( 5) 00:15:37.525 2.074 - 2.086: 13.7037% ( 1670) 00:15:37.525 2.086 - 2.098: 36.7095% ( 2812) 00:15:37.525 2.098 - 2.110: 39.2293% ( 308) 00:15:37.525 2.110 - 2.121: 51.4358% ( 1492) 00:15:37.525 2.121 - 2.133: 57.7518% ( 772) 00:15:37.525 2.133 - 2.145: 59.6008% ( 226) 00:15:37.525 2.145 - 2.157: 68.6820% ( 1110) 00:15:37.525 2.157 - 2.169: 
74.0571% ( 657) 00:15:37.525 2.169 - 2.181: 75.4070% ( 165) 00:15:37.525 2.181 - 2.193: 79.4486% ( 494) 00:15:37.525 2.193 - 2.204: 81.0276% ( 193) 00:15:37.525 2.204 - 2.216: 81.5675% ( 66) 00:15:37.525 2.216 - 2.228: 85.1673% ( 440) 00:15:37.525 2.228 - 2.240: 89.0780% ( 478) 00:15:37.525 2.240 - 2.252: 90.4606% ( 169) 00:15:37.525 2.252 - 2.264: 92.1132% ( 202) 00:15:37.525 2.264 - 2.276: 92.8495% ( 90) 00:15:37.525 2.276 - 2.287: 93.1604% ( 38) 00:15:37.525 2.287 - 2.299: 93.7331% ( 70) 00:15:37.525 2.299 - 2.311: 94.5840% ( 104) 00:15:37.525 2.311 - 2.323: 95.1158% ( 65) 00:15:37.525 2.323 - 2.335: 95.1567% ( 5) 00:15:37.525 2.335 - 2.347: 95.1894% ( 4) 00:15:37.526 2.347 - 2.359: 95.2630% ( 9) 00:15:37.526 2.359 - 2.370: 95.3448% ( 10) 00:15:37.526 2.370 - 2.382: 95.5412% ( 24) 00:15:37.526 2.382 - 2.394: 95.8194% ( 34) 00:15:37.526 2.394 - 2.406: 96.1057% ( 35) 00:15:37.526 2.406 - 2.418: 96.2366% ( 16) 00:15:37.526 2.418 - 2.430: 96.4411% ( 25) 00:15:37.526 2.430 - 2.441: 96.6784% ( 29) 00:15:37.526 2.441 - 2.453: 96.9157% ( 29) 00:15:37.526 2.453 - 2.465: 97.1284% ( 26) 00:15:37.526 2.465 - 2.477: 97.3411% ( 26) 00:15:37.526 2.477 - 2.489: 97.5211% ( 22) 00:15:37.526 2.489 - 2.501: 97.6683% ( 18) 00:15:37.526 2.501 - 2.513: 97.8156% ( 18) 00:15:37.526 2.513 - 2.524: 97.8974% ( 10) 00:15:37.526 2.524 - 2.536: 97.9956% ( 12) 00:15:37.526 2.536 - 2.548: 98.0447% ( 6) 00:15:37.526 2.548 - 2.560: 98.1019% ( 7) 00:15:37.526 2.560 - 2.572: 98.1183% ( 2) 00:15:37.526 2.572 - 2.584: 98.1756% ( 7) 00:15:37.526 2.584 - 2.596: 98.1919% ( 2) 00:15:37.526 2.596 - 2.607: 98.2165% ( 3) 00:15:37.526 2.607 - 2.619: 98.2328% ( 2) 00:15:37.526 2.619 - 2.631: 98.2492% ( 2) 00:15:37.526 2.631 - 2.643: 98.2656% ( 2) 00:15:37.526 2.643 - 2.655: 98.2819% ( 2) 00:15:37.526 2.690 - 2.702: 98.2983% ( 2) 00:15:37.526 2.821 - 2.833: 98.3065% ( 1) 00:15:37.526 2.833 - 2.844: 98.3147% ( 1) 00:15:37.526 2.844 - 2.856: 98.3310% ( 2) 00:15:37.526 2.868 - 2.880: 98.3392% ( 1) 00:15:37.526 
00:15:37.526 2.904 - 2.916: 98.3556% ( 2) 00:15:37.526 2.951 - 2.963: 98.3637% ( 1) [2024-12-06 19:13:47.810578] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.526 3.022 - 3.034: 98.3719% ( 1) 00:15:37.526 3.034 - 3.058: 98.3801% ( 1) 00:15:37.526 3.484 - 3.508: 98.3883% ( 1) 00:15:37.526 3.508 - 3.532: 98.4046% ( 2) 00:15:37.526 3.579 - 3.603: 98.4292% ( 3) 00:15:37.526 3.603 - 3.627: 98.4374% ( 1) 00:15:37.526 3.627 - 3.650: 98.4456% ( 1) 00:15:37.526 3.721 - 3.745: 98.4537% ( 1) 00:15:37.526 3.745 - 3.769: 98.4701% ( 2) 00:15:37.526 3.769 - 3.793: 98.4783% ( 1) 00:15:37.526 3.793 - 3.816: 98.5028% ( 3) 00:15:37.526 3.840 - 3.864: 98.5110% ( 1) 00:15:37.526 3.864 - 3.887: 98.5274% ( 2) 00:15:37.526 3.887 - 3.911: 98.5437% ( 2) 00:15:37.526 3.911 - 3.935: 98.5601% ( 2) 00:15:37.526 3.959 - 3.982: 98.5683% ( 1) 00:15:37.526 4.077 - 4.101: 98.5765% ( 1) 00:15:37.526 4.148 - 4.172: 98.5846% ( 1) 00:15:37.526 4.267 - 4.290: 98.5928% ( 1) 00:15:37.526 4.361 - 4.385: 98.6010% ( 1) 00:15:37.526 5.594 - 5.618: 98.6092% ( 1) 00:15:37.526 5.831 - 5.855: 98.6174% ( 1) 00:15:37.526 6.021 - 6.044: 98.6255% ( 1) 00:15:37.526 6.068 - 6.116: 98.6337% ( 1) 00:15:37.526 6.258 - 6.305: 98.6419% ( 1) 00:15:37.526 6.305 - 6.353: 98.6501% ( 1) 00:15:37.526 6.447 - 6.495: 98.6583% ( 1) 00:15:37.526 6.637 - 6.684: 98.6664% ( 1) 00:15:37.526 6.779 - 6.827: 98.6746% ( 1) 00:15:37.526 6.827 - 6.874: 98.6910% ( 2) 00:15:37.526 6.874 - 6.921: 98.6992% ( 1) 00:15:37.526 6.969 - 7.016: 98.7074% ( 1) 00:15:37.526 7.159 - 7.206: 98.7155% ( 1) 00:15:37.526 7.206 - 7.253: 98.7237% ( 1) 00:15:37.526 7.443 - 7.490: 98.7319% ( 1) 00:15:37.526 7.490 - 7.538: 98.7401% ( 1) 00:15:37.526 7.585 - 7.633: 98.7483% ( 1) 00:15:37.526 7.822 - 7.870: 98.7564% ( 1) 00:15:37.526 7.964 - 8.012: 98.7646% ( 1) 00:15:37.526 8.154 - 8.201: 98.7728% ( 1) 00:15:37.526 9.102 - 9.150: 98.7810% ( 1) 00:15:37.526 15.550 - 15.644: 98.7973% ( 2) 00:15:37.526 
15.644 - 15.739: 98.8137% ( 2) 00:15:37.526 15.739 - 15.834: 98.8383% ( 3) 00:15:37.526 15.834 - 15.929: 98.8710% ( 4) 00:15:37.526 15.929 - 16.024: 98.9037% ( 4) 00:15:37.526 16.024 - 16.119: 98.9201% ( 2) 00:15:37.526 16.119 - 16.213: 98.9283% ( 1) 00:15:37.526 16.213 - 16.308: 98.9528% ( 3) 00:15:37.526 16.308 - 16.403: 98.9855% ( 4) 00:15:37.526 16.403 - 16.498: 99.0101% ( 3) 00:15:37.526 16.498 - 16.593: 99.0428% ( 4) 00:15:37.526 16.593 - 16.687: 99.1001% ( 7) 00:15:37.526 16.687 - 16.782: 99.1164% ( 2) 00:15:37.526 16.782 - 16.877: 99.1573% ( 5) 00:15:37.526 16.877 - 16.972: 99.1901% ( 4) 00:15:37.526 16.972 - 17.067: 99.2064% ( 2) 00:15:37.526 17.067 - 17.161: 99.2146% ( 1) 00:15:37.526 17.256 - 17.351: 99.2228% ( 1) 00:15:37.526 17.541 - 17.636: 99.2310% ( 1) 00:15:37.526 18.204 - 18.299: 99.2391% ( 1) 00:15:37.526 18.489 - 18.584: 99.2473% ( 1) 00:15:37.526 18.584 - 18.679: 99.2555% ( 1) 00:15:37.526 3021.938 - 3034.074: 99.2637% ( 1) 00:15:37.526 3082.619 - 3094.756: 99.2719% ( 1) 00:15:37.526 3470.981 - 3495.253: 99.2800% ( 1) 00:15:37.526 3980.705 - 4004.978: 99.8118% ( 65) 00:15:37.526 4004.978 - 4029.250: 100.0000% ( 23) 00:15:37.526 00:15:37.526 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:37.526 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:37.526 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:37.526 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:37.526 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:37.784 [ 00:15:37.784 { 00:15:37.784 
"nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:37.784 "subtype": "Discovery", 00:15:37.784 "listen_addresses": [], 00:15:37.784 "allow_any_host": true, 00:15:37.784 "hosts": [] 00:15:37.784 }, 00:15:37.784 { 00:15:37.784 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:37.784 "subtype": "NVMe", 00:15:37.784 "listen_addresses": [ 00:15:37.784 { 00:15:37.784 "trtype": "VFIOUSER", 00:15:37.784 "adrfam": "IPv4", 00:15:37.784 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:37.784 "trsvcid": "0" 00:15:37.784 } 00:15:37.784 ], 00:15:37.784 "allow_any_host": true, 00:15:37.784 "hosts": [], 00:15:37.784 "serial_number": "SPDK1", 00:15:37.784 "model_number": "SPDK bdev Controller", 00:15:37.784 "max_namespaces": 32, 00:15:37.784 "min_cntlid": 1, 00:15:37.784 "max_cntlid": 65519, 00:15:37.784 "namespaces": [ 00:15:37.784 { 00:15:37.784 "nsid": 1, 00:15:37.784 "bdev_name": "Malloc1", 00:15:37.784 "name": "Malloc1", 00:15:37.784 "nguid": "8968673CD1914E2383E6174FE3B90331", 00:15:37.784 "uuid": "8968673c-d191-4e23-83e6-174fe3b90331" 00:15:37.784 }, 00:15:37.784 { 00:15:37.784 "nsid": 2, 00:15:37.784 "bdev_name": "Malloc3", 00:15:37.784 "name": "Malloc3", 00:15:37.784 "nguid": "FB2956323BBD44C588656BDF01684C42", 00:15:37.784 "uuid": "fb295632-3bbd-44c5-8865-6bdf01684c42" 00:15:37.784 } 00:15:37.784 ] 00:15:37.784 }, 00:15:37.784 { 00:15:37.784 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:37.784 "subtype": "NVMe", 00:15:37.784 "listen_addresses": [ 00:15:37.784 { 00:15:37.784 "trtype": "VFIOUSER", 00:15:37.784 "adrfam": "IPv4", 00:15:37.784 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:37.784 "trsvcid": "0" 00:15:37.784 } 00:15:37.784 ], 00:15:37.784 "allow_any_host": true, 00:15:37.784 "hosts": [], 00:15:37.784 "serial_number": "SPDK2", 00:15:37.784 "model_number": "SPDK bdev Controller", 00:15:37.784 "max_namespaces": 32, 00:15:37.784 "min_cntlid": 1, 00:15:37.784 "max_cntlid": 65519, 00:15:37.784 "namespaces": [ 00:15:37.784 { 00:15:37.784 "nsid": 1, 
00:15:37.784 "bdev_name": "Malloc2", 00:15:37.784 "name": "Malloc2", 00:15:37.784 "nguid": "AB4D20D0A5194FF18EAB285407D8C3E2", 00:15:37.784 "uuid": "ab4d20d0-a519-4ff1-8eab-285407d8c3e2" 00:15:37.784 } 00:15:37.784 ] 00:15:37.784 } 00:15:37.784 ] 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1097758 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:37.784 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:37.784 [2024-12-06 19:13:48.343172] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.041 Malloc4 00:15:38.041 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:38.297 [2024-12-06 19:13:48.743144] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.297 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:38.297 Asynchronous Event Request test 00:15:38.297 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.297 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.297 Registering asynchronous event callbacks... 00:15:38.297 Starting namespace attribute notice tests for all controllers... 00:15:38.297 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:38.297 aer_cb - Changed Namespace 00:15:38.297 Cleaning up... 
00:15:38.556 [ 00:15:38.556 { 00:15:38.556 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:38.556 "subtype": "Discovery", 00:15:38.556 "listen_addresses": [], 00:15:38.556 "allow_any_host": true, 00:15:38.556 "hosts": [] 00:15:38.556 }, 00:15:38.556 { 00:15:38.556 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:38.556 "subtype": "NVMe", 00:15:38.556 "listen_addresses": [ 00:15:38.556 { 00:15:38.556 "trtype": "VFIOUSER", 00:15:38.556 "adrfam": "IPv4", 00:15:38.556 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:38.556 "trsvcid": "0" 00:15:38.556 } 00:15:38.556 ], 00:15:38.556 "allow_any_host": true, 00:15:38.556 "hosts": [], 00:15:38.556 "serial_number": "SPDK1", 00:15:38.556 "model_number": "SPDK bdev Controller", 00:15:38.556 "max_namespaces": 32, 00:15:38.556 "min_cntlid": 1, 00:15:38.556 "max_cntlid": 65519, 00:15:38.556 "namespaces": [ 00:15:38.556 { 00:15:38.556 "nsid": 1, 00:15:38.556 "bdev_name": "Malloc1", 00:15:38.556 "name": "Malloc1", 00:15:38.556 "nguid": "8968673CD1914E2383E6174FE3B90331", 00:15:38.556 "uuid": "8968673c-d191-4e23-83e6-174fe3b90331" 00:15:38.556 }, 00:15:38.556 { 00:15:38.556 "nsid": 2, 00:15:38.556 "bdev_name": "Malloc3", 00:15:38.556 "name": "Malloc3", 00:15:38.556 "nguid": "FB2956323BBD44C588656BDF01684C42", 00:15:38.556 "uuid": "fb295632-3bbd-44c5-8865-6bdf01684c42" 00:15:38.556 } 00:15:38.556 ] 00:15:38.556 }, 00:15:38.556 { 00:15:38.556 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:38.556 "subtype": "NVMe", 00:15:38.556 "listen_addresses": [ 00:15:38.556 { 00:15:38.556 "trtype": "VFIOUSER", 00:15:38.556 "adrfam": "IPv4", 00:15:38.556 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:38.556 "trsvcid": "0" 00:15:38.556 } 00:15:38.556 ], 00:15:38.556 "allow_any_host": true, 00:15:38.556 "hosts": [], 00:15:38.556 "serial_number": "SPDK2", 00:15:38.556 "model_number": "SPDK bdev Controller", 00:15:38.556 "max_namespaces": 32, 00:15:38.556 "min_cntlid": 1, 00:15:38.556 "max_cntlid": 65519, 00:15:38.556 "namespaces": [ 
00:15:38.556 { 00:15:38.556 "nsid": 1, 00:15:38.556 "bdev_name": "Malloc2", 00:15:38.556 "name": "Malloc2", 00:15:38.556 "nguid": "AB4D20D0A5194FF18EAB285407D8C3E2", 00:15:38.556 "uuid": "ab4d20d0-a519-4ff1-8eab-285407d8c3e2" 00:15:38.556 }, 00:15:38.556 { 00:15:38.556 "nsid": 2, 00:15:38.556 "bdev_name": "Malloc4", 00:15:38.556 "name": "Malloc4", 00:15:38.556 "nguid": "B257471F7FFD49DEA68C29AD1198972C", 00:15:38.556 "uuid": "b257471f-7ffd-49de-a68c-29ad1198972c" 00:15:38.556 } 00:15:38.556 ] 00:15:38.556 } 00:15:38.556 ] 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1097758 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1092156 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1092156 ']' 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1092156 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092156 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092156' 00:15:38.556 killing process with pid 1092156 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1092156 00:15:38.556 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1092156 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1097900 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1097900' 00:15:39.124 Process pid: 1097900 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1097900 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1097900 ']' 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.124 
19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:39.124 [2024-12-06 19:13:49.452219] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:39.124 [2024-12-06 19:13:49.453231] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:15:39.124 [2024-12-06 19:13:49.453299] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.124 [2024-12-06 19:13:49.517724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.124 [2024-12-06 19:13:49.573868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.124 [2024-12-06 19:13:49.573929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.124 [2024-12-06 19:13:49.573953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.124 [2024-12-06 19:13:49.573964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.124 [2024-12-06 19:13:49.573973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:39.124 [2024-12-06 19:13:49.575493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.124 [2024-12-06 19:13:49.575614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.124 [2024-12-06 19:13:49.575688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.124 [2024-12-06 19:13:49.575692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.124 [2024-12-06 19:13:49.666164] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:39.124 [2024-12-06 19:13:49.666690] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:39.124 [2024-12-06 19:13:49.667293] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:39.124 [2024-12-06 19:13:49.667513] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:39.124 [2024-12-06 19:13:49.669879] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:39.124 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:40.535 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:40.535 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:40.535 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:40.535 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:40.535 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:40.535 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:40.792 Malloc1 00:15:40.792 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:41.049 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:41.616 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:41.616 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:41.616 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:41.616 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:41.874 Malloc2 00:15:41.874 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:42.439 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:42.439 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:42.697 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:42.697 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1097900 00:15:42.697 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1097900 ']' 00:15:42.697 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1097900 00:15:42.697 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:42.697 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.697 19:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1097900 00:15:42.979 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.979 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.979 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1097900' 00:15:42.979 killing process with pid 1097900 00:15:42.979 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1097900 00:15:42.979 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1097900 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:43.269 00:15:43.269 real 0m53.456s 00:15:43.269 user 3m26.634s 00:15:43.269 sys 0m3.878s 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:43.269 ************************************ 00:15:43.269 END TEST nvmf_vfio_user 00:15:43.269 ************************************ 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.269 ************************************ 00:15:43.269 START TEST nvmf_vfio_user_nvme_compliance 00:15:43.269 ************************************ 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:43.269 * Looking for test storage... 00:15:43.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.269 19:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.269 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.270 19:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:43.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.270 --rc genhtml_branch_coverage=1 00:15:43.270 --rc genhtml_function_coverage=1 00:15:43.270 --rc genhtml_legend=1 00:15:43.270 --rc geninfo_all_blocks=1 00:15:43.270 --rc geninfo_unexecuted_blocks=1 00:15:43.270 00:15:43.270 ' 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:43.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.270 --rc genhtml_branch_coverage=1 00:15:43.270 --rc genhtml_function_coverage=1 00:15:43.270 --rc genhtml_legend=1 00:15:43.270 --rc geninfo_all_blocks=1 00:15:43.270 --rc geninfo_unexecuted_blocks=1 00:15:43.270 00:15:43.270 ' 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:43.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.270 --rc genhtml_branch_coverage=1 00:15:43.270 --rc genhtml_function_coverage=1 00:15:43.270 --rc 
genhtml_legend=1 00:15:43.270 --rc geninfo_all_blocks=1 00:15:43.270 --rc geninfo_unexecuted_blocks=1 00:15:43.270 00:15:43.270 ' 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:43.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.270 --rc genhtml_branch_coverage=1 00:15:43.270 --rc genhtml_function_coverage=1 00:15:43.270 --rc genhtml_legend=1 00:15:43.270 --rc geninfo_all_blocks=1 00:15:43.270 --rc geninfo_unexecuted_blocks=1 00:15:43.270 00:15:43.270 ' 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.270 19:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.270 19:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1098514 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1098514' 00:15:43.270 Process pid: 1098514 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1098514 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1098514 ']' 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.270 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.271 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.271 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.271 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.529 [2024-12-06 19:13:53.853689] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:15:43.529 [2024-12-06 19:13:53.853776] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.529 [2024-12-06 19:13:53.922874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:43.529 [2024-12-06 19:13:53.980083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.529 [2024-12-06 19:13:53.980138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.529 [2024-12-06 19:13:53.980166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.529 [2024-12-06 19:13:53.980177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.529 [2024-12-06 19:13:53.980186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
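Both nvmf targets in this log (pids 1097900 and 1098514) are torn down through the `killprocess` helper in `common/autotest_common.sh`, whose xtrace output is visible above and below. A minimal sketch of that pattern, assuming the real script's guard logic differs in detail:

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess pattern from this log: confirm the pid is
# set and alive, refuse to signal a sudo wrapper, then SIGTERM and reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1              # not running
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid" 2>/dev/null)
        [ "$name" != sudo ] || return 1                 # never kill sudo itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                     # reap; ignore signal exit status
}

# Demo against a disposable background sleep:
sleep 60 &
pid=$!
killprocess "$pid" && echo "pid $pid terminated"
```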
00:15:43.529 [2024-12-06 19:13:53.981502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.529 [2024-12-06 19:13:53.981567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.529 [2024-12-06 19:13:53.981571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.529 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.529 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:43.529 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.907 19:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.907 malloc0 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:44.907 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:44.907 00:15:44.907 00:15:44.907 CUnit - A unit testing framework for C - Version 2.1-3 00:15:44.907 http://cunit.sourceforge.net/ 00:15:44.907 00:15:44.907 00:15:44.907 Suite: nvme_compliance 00:15:44.907 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 19:13:55.348477] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.907 [2024-12-06 19:13:55.350044] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:44.907 [2024-12-06 19:13:55.350068] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:44.907 [2024-12-06 19:13:55.350095] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:44.907 [2024-12-06 19:13:55.354519] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.907 passed 00:15:44.907 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 19:13:55.438163] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.907 [2024-12-06 19:13:55.441185] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.907 passed 00:15:45.166 Test: admin_identify_ns ...[2024-12-06 19:13:55.528234] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.166 [2024-12-06 19:13:55.588681] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:45.166 [2024-12-06 19:13:55.596696] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:45.166 [2024-12-06 19:13:55.617815] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:45.166 passed 00:15:45.166 Test: admin_get_features_mandatory_features ...[2024-12-06 19:13:55.701583] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.166 [2024-12-06 19:13:55.704604] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.166 passed 00:15:45.426 Test: admin_get_features_optional_features ...[2024-12-06 19:13:55.789168] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.426 [2024-12-06 19:13:55.792191] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.426 passed 00:15:45.426 Test: admin_set_features_number_of_queues ...[2024-12-06 19:13:55.877242] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.426 [2024-12-06 19:13:55.981780] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.685 passed 00:15:45.685 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 19:13:56.066798] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.685 [2024-12-06 19:13:56.069824] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.685 passed 00:15:45.685 Test: admin_get_log_page_with_lpo ...[2024-12-06 19:13:56.151045] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.685 [2024-12-06 19:13:56.217699] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:45.685 [2024-12-06 19:13:56.230764] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.944 passed 00:15:45.945 Test: fabric_property_get ...[2024-12-06 19:13:56.315659] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.945 [2024-12-06 19:13:56.316949] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:45.945 [2024-12-06 19:13:56.321707] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.945 passed 00:15:45.945 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 19:13:56.405272] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.945 [2024-12-06 19:13:56.406567] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:45.945 [2024-12-06 19:13:56.408292] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.945 passed 00:15:45.945 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 19:13:56.491273] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.203 [2024-12-06 19:13:56.574689] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:46.203 [2024-12-06 19:13:56.590677] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:46.203 [2024-12-06 19:13:56.595795] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.203 passed 00:15:46.203 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 19:13:56.679425] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.203 [2024-12-06 19:13:56.680750] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:46.203 [2024-12-06 19:13:56.682448] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.203 passed 00:15:46.203 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 19:13:56.763678] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.461 [2024-12-06 19:13:56.841674] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:46.461 [2024-12-06 
19:13:56.865689] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:46.461 [2024-12-06 19:13:56.870788] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.461 passed 00:15:46.461 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 19:13:56.954484] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.461 [2024-12-06 19:13:56.955830] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:46.461 [2024-12-06 19:13:56.955869] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:46.461 [2024-12-06 19:13:56.957509] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.461 passed 00:15:46.719 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 19:13:57.038841] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.719 [2024-12-06 19:13:57.134675] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:46.719 [2024-12-06 19:13:57.142703] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:46.719 [2024-12-06 19:13:57.150703] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:46.719 [2024-12-06 19:13:57.158677] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:46.719 [2024-12-06 19:13:57.187781] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.719 passed 00:15:46.719 Test: admin_create_io_sq_verify_pc ...[2024-12-06 19:13:57.267336] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.719 [2024-12-06 19:13:57.286689] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:46.978 [2024-12-06 19:13:57.304700] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.978 passed 00:15:46.978 Test: admin_create_io_qp_max_qps ...[2024-12-06 19:13:57.386267] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.356 [2024-12-06 19:13:58.498682] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:48.356 [2024-12-06 19:13:58.893275] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.356 passed 00:15:48.616 Test: admin_create_io_sq_shared_cq ...[2024-12-06 19:13:58.978179] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.616 [2024-12-06 19:13:59.109690] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:48.616 [2024-12-06 19:13:59.146780] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.616 passed 00:15:48.616 00:15:48.616 Run Summary: Type Total Ran Passed Failed Inactive 00:15:48.616 suites 1 1 n/a 0 0 00:15:48.616 tests 18 18 18 0 0 00:15:48.616 asserts 360 360 360 0 n/a 00:15:48.616 00:15:48.616 Elapsed time = 1.575 seconds 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1098514 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1098514 ']' 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1098514 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1098514 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1098514' 00:15:48.876 killing process with pid 1098514 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1098514 00:15:48.876 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1098514 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:49.137 00:15:49.137 real 0m5.861s 00:15:49.137 user 0m16.458s 00:15:49.137 sys 0m0.566s 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.137 ************************************ 00:15:49.137 END TEST nvmf_vfio_user_nvme_compliance 00:15:49.137 ************************************ 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.137 ************************************ 00:15:49.137 START TEST nvmf_vfio_user_fuzz 00:15:49.137 ************************************ 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:49.137 * Looking for test storage... 00:15:49.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.137 19:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:49.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.137 --rc genhtml_branch_coverage=1 00:15:49.137 --rc genhtml_function_coverage=1 00:15:49.137 --rc genhtml_legend=1 00:15:49.137 --rc geninfo_all_blocks=1 00:15:49.137 --rc geninfo_unexecuted_blocks=1 00:15:49.137 00:15:49.137 ' 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:49.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.137 --rc genhtml_branch_coverage=1 00:15:49.137 --rc genhtml_function_coverage=1 00:15:49.137 --rc genhtml_legend=1 00:15:49.137 --rc geninfo_all_blocks=1 00:15:49.137 --rc geninfo_unexecuted_blocks=1 00:15:49.137 00:15:49.137 ' 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:49.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.137 --rc genhtml_branch_coverage=1 00:15:49.137 --rc genhtml_function_coverage=1 00:15:49.137 --rc genhtml_legend=1 00:15:49.137 --rc geninfo_all_blocks=1 00:15:49.137 --rc geninfo_unexecuted_blocks=1 00:15:49.137 00:15:49.137 ' 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:49.137 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:49.137 --rc genhtml_branch_coverage=1 00:15:49.137 --rc genhtml_function_coverage=1 00:15:49.137 --rc genhtml_legend=1 00:15:49.137 --rc geninfo_all_blocks=1 00:15:49.137 --rc geninfo_unexecuted_blocks=1 00:15:49.137 00:15:49.137 ' 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.137 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.138 19:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:49.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1099261 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1099261' 00:15:49.138 Process pid: 1099261 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1099261 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1099261 ']' 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.138 19:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.138 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.708 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.708 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:49.708 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:50.648 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:50.648 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.648 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:50.648 malloc0 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:50.648 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:22.710 Fuzzing completed. Shutting down the fuzz application 00:16:22.711 00:16:22.711 Dumping successful admin opcodes: 00:16:22.711 9, 10, 00:16:22.711 Dumping successful io opcodes: 00:16:22.711 0, 00:16:22.711 NS: 0x20000081ef00 I/O qp, Total commands completed: 607436, total successful commands: 2349, random_seed: 2223070528 00:16:22.711 NS: 0x20000081ef00 admin qp, Total commands completed: 149152, total successful commands: 32, random_seed: 3366069952 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1099261 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1099261 ']' 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1099261 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1099261 00:16:22.711 19:14:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1099261' 00:16:22.711 killing process with pid 1099261 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1099261 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1099261 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:22.711 00:16:22.711 real 0m32.272s 00:16:22.711 user 0m30.705s 00:16:22.711 sys 0m28.844s 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:22.711 ************************************ 00:16:22.711 END TEST nvmf_vfio_user_fuzz 00:16:22.711 ************************************ 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.711 ************************************ 00:16:22.711 START TEST nvmf_auth_target 00:16:22.711 ************************************ 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:22.711 * Looking for test storage... 00:16:22.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:22.711 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.711 19:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.711 19:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:22.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.711 --rc genhtml_branch_coverage=1 00:16:22.711 --rc genhtml_function_coverage=1 00:16:22.711 --rc genhtml_legend=1 00:16:22.711 --rc geninfo_all_blocks=1 00:16:22.711 --rc geninfo_unexecuted_blocks=1 00:16:22.711 00:16:22.711 ' 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:22.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.711 --rc genhtml_branch_coverage=1 00:16:22.711 --rc genhtml_function_coverage=1 00:16:22.711 --rc genhtml_legend=1 00:16:22.711 --rc geninfo_all_blocks=1 00:16:22.711 --rc geninfo_unexecuted_blocks=1 00:16:22.711 00:16:22.711 ' 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:22.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.711 --rc genhtml_branch_coverage=1 00:16:22.711 --rc genhtml_function_coverage=1 00:16:22.711 --rc genhtml_legend=1 00:16:22.711 --rc geninfo_all_blocks=1 00:16:22.711 --rc geninfo_unexecuted_blocks=1 00:16:22.711 00:16:22.711 ' 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:22.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.711 --rc genhtml_branch_coverage=1 00:16:22.711 --rc genhtml_function_coverage=1 00:16:22.711 --rc genhtml_legend=1 00:16:22.711 
--rc geninfo_all_blocks=1 00:16:22.711 --rc geninfo_unexecuted_blocks=1 00:16:22.711 00:16:22.711 ' 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.711 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.712 
19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:22.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:22.712 19:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:22.712 19:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:22.712 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:23.653 19:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:23.653 19:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:23.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:23.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.653 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.653 
19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:23.654 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:23.654 
19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:23.654 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:23.654 19:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.654 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.913 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.913 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.913 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:23.913 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.913 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.913 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.913 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:23.913 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:23.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:16:23.913 00:16:23.913 --- 10.0.0.2 ping statistics --- 00:16:23.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.913 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:16:23.913 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:16:23.913 00:16:23.913 --- 10.0.0.1 ping statistics --- 00:16:23.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.914 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1104818 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1104818 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1104818 ']' 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.914 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.172 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:24.172 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.172 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.172 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.172 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1104841 00:16:24.172 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e2ca8450cdcaf317df3b4626236e10608dcebad95f7d6edc 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.syz 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e2ca8450cdcaf317df3b4626236e10608dcebad95f7d6edc 0 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e2ca8450cdcaf317df3b4626236e10608dcebad95f7d6edc 0 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e2ca8450cdcaf317df3b4626236e10608dcebad95f7d6edc 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.syz 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.syz 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.syz 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1d02d618c0b63ad81d0abce069462b6e0d4809f49ad78354f5fd8e4deeb7950f 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vTc 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1d02d618c0b63ad81d0abce069462b6e0d4809f49ad78354f5fd8e4deeb7950f 3 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1d02d618c0b63ad81d0abce069462b6e0d4809f49ad78354f5fd8e4deeb7950f 3 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1d02d618c0b63ad81d0abce069462b6e0d4809f49ad78354f5fd8e4deeb7950f 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.432 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vTc 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vTc 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.vTc 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=96e8355f15bcec685acee5debfc82a35 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.FJY 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 96e8355f15bcec685acee5debfc82a35 1 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
96e8355f15bcec685acee5debfc82a35 1 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=96e8355f15bcec685acee5debfc82a35 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.FJY 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.FJY 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.FJY 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f4b71539b61c34c4aa63bc60ab8ddcd6a13bb1982102ec10 00:16:24.433 19:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.k13 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f4b71539b61c34c4aa63bc60ab8ddcd6a13bb1982102ec10 2 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f4b71539b61c34c4aa63bc60ab8ddcd6a13bb1982102ec10 2 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f4b71539b61c34c4aa63bc60ab8ddcd6a13bb1982102ec10 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.k13 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.k13 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.k13 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b864005d59fd448041a2c338c1437fbaa6c59ac5f7c6423 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VmU 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6b864005d59fd448041a2c338c1437fbaa6c59ac5f7c6423 2 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b864005d59fd448041a2c338c1437fbaa6c59ac5f7c6423 2 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b864005d59fd448041a2c338c1437fbaa6c59ac5f7c6423 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VmU 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VmU 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.VmU 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.433 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:24.433 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:24.433 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:24.433 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.433 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=28b09b56d2facefcd2e71d58196d2412 00:16:24.433 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:24.433 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.V11 00:16:24.433 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 28b09b56d2facefcd2e71d58196d2412 1 00:16:24.433 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 28b09b56d2facefcd2e71d58196d2412 1 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=28b09b56d2facefcd2e71d58196d2412 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.V11 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.V11 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.V11 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1fa80c3277e8333b31a3e2b4612c4f179d8df25965ff460850d354e42479eedf 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cUJ 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1fa80c3277e8333b31a3e2b4612c4f179d8df25965ff460850d354e42479eedf 3 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 1fa80c3277e8333b31a3e2b4612c4f179d8df25965ff460850d354e42479eedf 3 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1fa80c3277e8333b31a3e2b4612c4f179d8df25965ff460850d354e42479eedf 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cUJ 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cUJ 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.cUJ 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1104818 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1104818 ']' 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
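The key-generation phase above builds each DH-HMAC-CHAP secret by reading random bytes with `xxd -p`, then wrapping the hex string in the `DHHC-1` transport representation via an inline `python -` heredoc (`format_key`). Judging from the secrets that appear later in this trace (e.g. `DHHC-1:00:ZTJjYTg0…`), the base64 payload carries the ASCII hex text itself followed by its little-endian CRC-32, and the two-digit field after `DHHC-1:` is the hash identifier (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), matching the `digest=0…3` values set above. A minimal standalone sketch of that formatting step — the function name and exact behavior are inferred from the trace, not copied from `nvmf/common.sh`:

```python
import base64
import zlib


def format_dhchap_key(hex_key: str, hash_id: int) -> str:
    """Wrap a hex key string in the DHHC-1 secret representation:
    DHHC-1:<hash_id>:base64(payload + CRC-32(payload), little-endian):

    The payload is the ASCII hex text as produced by `xxd -p`, which is
    consistent with the secrets visible in this trace (their base64
    decodes back to the hex string, plus a 4-byte checksum).
    """
    payload = hex_key.encode("ascii")
    crc = zlib.crc32(payload).to_bytes(4, "little")
    b64 = base64.b64encode(payload + crc).decode("ascii")
    return "DHHC-1:%02x:%s:" % (hash_id, b64)


if __name__ == "__main__":
    # Key 0 from the trace: a 48-character secret with the "null" digest.
    print(format_dhchap_key(
        "e2ca8450cdcaf317df3b4626236e10608dcebad95f7d6edc", 0))
```

The `--dhchap-secret` / `--dhchap-ctrl-secret` arguments passed to `nvme connect` later in the log are strings of exactly this shape.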
00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.693 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.965 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.965 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:24.965 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1104841 /var/tmp/host.sock 00:16:24.965 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1104841 ']' 00:16:24.965 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:24.965 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.965 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:24.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:24.965 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.965 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.syz 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.syz 00:16:25.222 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.syz 00:16:25.480 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.vTc ]] 00:16:25.480 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vTc 00:16:25.480 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.480 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.480 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.480 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vTc 00:16:25.480 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vTc 00:16:25.738 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:25.738 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.FJY 00:16:25.739 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.739 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.739 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.739 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.FJY 00:16:25.739 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.FJY 00:16:25.997 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.k13 ]] 00:16:25.997 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k13 00:16:25.997 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.997 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.997 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.997 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k13 00:16:25.997 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k13 00:16:26.255 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:26.255 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.VmU 00:16:26.255 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.255 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.255 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.255 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.VmU 00:16:26.255 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.VmU 00:16:26.514 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.V11 ]] 00:16:26.514 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.V11 00:16:26.514 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.514 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.514 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.514 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.V11 00:16:26.514 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.V11 00:16:26.773 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:26.773 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cUJ 00:16:26.773 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.773 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.773 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.773 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.cUJ 00:16:26.773 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.cUJ 00:16:27.032 19:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:27.032 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:27.032 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.032 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.032 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.032 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.601 19:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.601 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.860 00:16:27.860 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.860 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.860 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.118 { 00:16:28.118 "cntlid": 1, 00:16:28.118 "qid": 0, 00:16:28.118 "state": "enabled", 00:16:28.118 "thread": "nvmf_tgt_poll_group_000", 00:16:28.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:28.118 "listen_address": { 00:16:28.118 "trtype": "TCP", 00:16:28.118 "adrfam": "IPv4", 00:16:28.118 "traddr": "10.0.0.2", 00:16:28.118 "trsvcid": "4420" 00:16:28.118 }, 00:16:28.118 "peer_address": { 00:16:28.118 "trtype": "TCP", 00:16:28.118 "adrfam": "IPv4", 00:16:28.118 "traddr": "10.0.0.1", 00:16:28.118 "trsvcid": "42554" 00:16:28.118 }, 00:16:28.118 "auth": { 00:16:28.118 "state": "completed", 00:16:28.118 "digest": "sha256", 00:16:28.118 "dhgroup": "null" 00:16:28.118 } 00:16:28.118 } 00:16:28.118 ]' 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.118 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.378 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:16:28.378 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:16:29.343 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.343 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.343 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.343 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.343 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.343 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.343 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:29.343 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.600 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.859 00:16:30.117 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.117 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.117 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.378 { 00:16:30.378 "cntlid": 3, 00:16:30.378 "qid": 0, 00:16:30.378 "state": "enabled", 00:16:30.378 "thread": "nvmf_tgt_poll_group_000", 00:16:30.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:30.378 "listen_address": { 00:16:30.378 "trtype": "TCP", 00:16:30.378 "adrfam": "IPv4", 00:16:30.378 
"traddr": "10.0.0.2", 00:16:30.378 "trsvcid": "4420" 00:16:30.378 }, 00:16:30.378 "peer_address": { 00:16:30.378 "trtype": "TCP", 00:16:30.378 "adrfam": "IPv4", 00:16:30.378 "traddr": "10.0.0.1", 00:16:30.378 "trsvcid": "42570" 00:16:30.378 }, 00:16:30.378 "auth": { 00:16:30.378 "state": "completed", 00:16:30.378 "digest": "sha256", 00:16:30.378 "dhgroup": "null" 00:16:30.378 } 00:16:30.378 } 00:16:30.378 ]' 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.378 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.638 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:16:30.638 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:16:31.576 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.576 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.576 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.576 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.576 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.576 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.576 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.576 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.835 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.093 00:16:32.093 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.093 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.093 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.352 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.352 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.352 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.352 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.352 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.352 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.352 { 00:16:32.352 "cntlid": 5, 00:16:32.352 "qid": 0, 00:16:32.352 "state": "enabled", 00:16:32.352 "thread": "nvmf_tgt_poll_group_000", 00:16:32.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:32.352 "listen_address": { 00:16:32.352 "trtype": "TCP", 00:16:32.352 "adrfam": "IPv4", 00:16:32.352 "traddr": "10.0.0.2", 00:16:32.352 "trsvcid": "4420" 00:16:32.352 }, 00:16:32.352 "peer_address": { 00:16:32.352 "trtype": "TCP", 00:16:32.352 "adrfam": "IPv4", 00:16:32.352 "traddr": "10.0.0.1", 00:16:32.352 "trsvcid": "42606" 00:16:32.352 }, 00:16:32.352 "auth": { 00:16:32.352 "state": "completed", 00:16:32.352 "digest": "sha256", 00:16:32.352 "dhgroup": "null" 00:16:32.352 } 00:16:32.352 } 00:16:32.352 ]' 00:16:32.352 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.630 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.630 19:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.630 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.630 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.630 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.630 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.630 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.889 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:16:32.889 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:16:33.827 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.827 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.827 
19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.828 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.828 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.828 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.828 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.828 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.086 19:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.086 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.344 00:16:34.345 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.345 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.345 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.602 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.602 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.602 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.602 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.602 19:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.602 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.602 { 00:16:34.602 "cntlid": 7, 00:16:34.602 "qid": 0, 00:16:34.602 "state": "enabled", 00:16:34.602 "thread": "nvmf_tgt_poll_group_000", 00:16:34.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:34.602 "listen_address": { 00:16:34.602 "trtype": "TCP", 00:16:34.602 "adrfam": "IPv4", 00:16:34.602 "traddr": "10.0.0.2", 00:16:34.602 "trsvcid": "4420" 00:16:34.602 }, 00:16:34.602 "peer_address": { 00:16:34.602 "trtype": "TCP", 00:16:34.602 "adrfam": "IPv4", 00:16:34.602 "traddr": "10.0.0.1", 00:16:34.602 "trsvcid": "42632" 00:16:34.602 }, 00:16:34.602 "auth": { 00:16:34.602 "state": "completed", 00:16:34.602 "digest": "sha256", 00:16:34.602 "dhgroup": "null" 00:16:34.602 } 00:16:34.602 } 00:16:34.602 ]' 00:16:34.602 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.602 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.602 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.881 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.881 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.881 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.881 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.881 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:35.141 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:16:35.141 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:16:36.077 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.077 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.077 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.077 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.077 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.077 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.077 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.077 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.077 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.336 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.336 19:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.595 00:16:36.595 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.595 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.595 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.854 { 00:16:36.854 "cntlid": 9, 00:16:36.854 "qid": 0, 00:16:36.854 "state": "enabled", 00:16:36.854 "thread": "nvmf_tgt_poll_group_000", 00:16:36.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:36.854 "listen_address": { 00:16:36.854 "trtype": "TCP", 00:16:36.854 "adrfam": "IPv4", 00:16:36.854 "traddr": "10.0.0.2", 00:16:36.854 "trsvcid": "4420" 00:16:36.854 }, 00:16:36.854 "peer_address": { 
00:16:36.854 "trtype": "TCP", 00:16:36.854 "adrfam": "IPv4", 00:16:36.854 "traddr": "10.0.0.1", 00:16:36.854 "trsvcid": "46574" 00:16:36.854 }, 00:16:36.854 "auth": { 00:16:36.854 "state": "completed", 00:16:36.854 "digest": "sha256", 00:16:36.854 "dhgroup": "ffdhe2048" 00:16:36.854 } 00:16:36.854 } 00:16:36.854 ]' 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.854 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.112 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:16:37.112 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:16:38.045 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.045 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.045 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.045 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.045 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.303 19:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.303 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.871 00:16:38.871 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.871 19:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.871 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.871 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.871 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.871 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.871 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.871 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.871 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.871 { 00:16:38.871 "cntlid": 11, 00:16:38.871 "qid": 0, 00:16:38.871 "state": "enabled", 00:16:38.871 "thread": "nvmf_tgt_poll_group_000", 00:16:38.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:38.871 "listen_address": { 00:16:38.871 "trtype": "TCP", 00:16:38.871 "adrfam": "IPv4", 00:16:38.871 "traddr": "10.0.0.2", 00:16:38.871 "trsvcid": "4420" 00:16:38.871 }, 00:16:38.871 "peer_address": { 00:16:38.871 "trtype": "TCP", 00:16:38.871 "adrfam": "IPv4", 00:16:38.871 "traddr": "10.0.0.1", 00:16:38.871 "trsvcid": "46590" 00:16:38.871 }, 00:16:38.871 "auth": { 00:16:38.871 "state": "completed", 00:16:38.871 "digest": "sha256", 00:16:38.871 "dhgroup": "ffdhe2048" 00:16:38.871 } 00:16:38.871 } 00:16:38.871 ]' 00:16:39.130 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.130 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:39.130 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.130 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.130 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.130 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.130 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.130 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.417 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:16:39.417 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:16:40.378 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.378 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.378 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.378 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.378 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.378 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.378 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.378 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.636 19:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.636 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.894 00:16:40.894 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.894 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.894 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.152 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.152 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.152 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.152 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.152 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.152 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.152 { 00:16:41.152 "cntlid": 13, 00:16:41.152 "qid": 0, 00:16:41.152 "state": "enabled", 00:16:41.152 "thread": "nvmf_tgt_poll_group_000", 00:16:41.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:41.152 "listen_address": { 00:16:41.152 "trtype": "TCP", 00:16:41.152 "adrfam": "IPv4", 00:16:41.152 "traddr": "10.0.0.2", 00:16:41.152 "trsvcid": "4420" 00:16:41.152 }, 00:16:41.152 "peer_address": { 00:16:41.152 "trtype": "TCP", 00:16:41.152 "adrfam": "IPv4", 00:16:41.152 "traddr": "10.0.0.1", 00:16:41.152 "trsvcid": "46626" 00:16:41.152 }, 00:16:41.152 "auth": { 00:16:41.152 "state": "completed", 00:16:41.152 "digest": "sha256", 00:16:41.152 "dhgroup": "ffdhe2048" 00:16:41.152 } 00:16:41.152 } 00:16:41.152 ]' 00:16:41.152 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.152 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.152 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.411 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.411 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.411 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.411 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:41.411 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.669 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:16:41.669 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:16:42.605 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.605 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.605 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.605 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.605 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.605 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.605 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.605 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.863 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.121 00:16:43.121 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.121 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.121 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.378 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.378 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.378 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.378 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.378 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.378 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.378 { 00:16:43.378 "cntlid": 15, 00:16:43.378 "qid": 0, 00:16:43.378 "state": "enabled", 00:16:43.378 "thread": "nvmf_tgt_poll_group_000", 00:16:43.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:43.378 "listen_address": { 00:16:43.378 "trtype": "TCP", 00:16:43.378 "adrfam": "IPv4", 00:16:43.378 "traddr": "10.0.0.2", 00:16:43.378 "trsvcid": 
"4420" 00:16:43.378 }, 00:16:43.378 "peer_address": { 00:16:43.378 "trtype": "TCP", 00:16:43.378 "adrfam": "IPv4", 00:16:43.378 "traddr": "10.0.0.1", 00:16:43.378 "trsvcid": "46640" 00:16:43.378 }, 00:16:43.378 "auth": { 00:16:43.378 "state": "completed", 00:16:43.378 "digest": "sha256", 00:16:43.378 "dhgroup": "ffdhe2048" 00:16:43.378 } 00:16:43.378 } 00:16:43.378 ]' 00:16:43.378 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.378 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.378 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.635 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.635 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.635 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.635 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.635 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.893 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:16:43.893 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:16:44.829 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.829 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.829 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.829 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.829 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.829 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.829 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.829 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.829 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.086 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.343 00:16:45.343 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.343 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:45.343 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.600 { 00:16:45.600 "cntlid": 17, 00:16:45.600 "qid": 0, 00:16:45.600 "state": "enabled", 00:16:45.600 "thread": "nvmf_tgt_poll_group_000", 00:16:45.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:45.600 "listen_address": { 00:16:45.600 "trtype": "TCP", 00:16:45.600 "adrfam": "IPv4", 00:16:45.600 "traddr": "10.0.0.2", 00:16:45.600 "trsvcid": "4420" 00:16:45.600 }, 00:16:45.600 "peer_address": { 00:16:45.600 "trtype": "TCP", 00:16:45.600 "adrfam": "IPv4", 00:16:45.600 "traddr": "10.0.0.1", 00:16:45.600 "trsvcid": "55348" 00:16:45.600 }, 00:16:45.600 "auth": { 00:16:45.600 "state": "completed", 00:16:45.600 "digest": "sha256", 00:16:45.600 "dhgroup": "ffdhe3072" 00:16:45.600 } 00:16:45.600 } 00:16:45.600 ]' 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.600 19:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.600 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.857 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.857 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.857 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.114 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:16:46.114 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:16:47.051 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.051 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.051 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.051 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.051 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.051 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.051 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.051 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.308 19:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.308 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.565 00:16:47.565 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.565 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.565 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.822 { 00:16:47.822 "cntlid": 19, 00:16:47.822 "qid": 0, 00:16:47.822 "state": "enabled", 00:16:47.822 "thread": "nvmf_tgt_poll_group_000", 00:16:47.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:47.822 "listen_address": { 00:16:47.822 "trtype": "TCP", 00:16:47.822 "adrfam": "IPv4", 00:16:47.822 "traddr": "10.0.0.2", 00:16:47.822 "trsvcid": "4420" 00:16:47.822 }, 00:16:47.822 "peer_address": { 00:16:47.822 "trtype": "TCP", 00:16:47.822 "adrfam": "IPv4", 00:16:47.822 "traddr": "10.0.0.1", 00:16:47.822 "trsvcid": "55368" 00:16:47.822 }, 00:16:47.822 "auth": { 00:16:47.822 "state": "completed", 00:16:47.822 "digest": "sha256", 00:16:47.822 "dhgroup": "ffdhe3072" 00:16:47.822 } 00:16:47.822 } 00:16:47.822 ]' 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:47.822 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.388 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:16:48.388 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:16:48.956 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.220 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.220 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.220 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.220 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.220 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.220 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.220 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.480 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.738 00:16:49.738 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.738 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.738 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.997 { 00:16:49.997 "cntlid": 21, 00:16:49.997 "qid": 0, 00:16:49.997 "state": "enabled", 00:16:49.997 "thread": "nvmf_tgt_poll_group_000", 00:16:49.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:49.997 "listen_address": { 
00:16:49.997 "trtype": "TCP", 00:16:49.997 "adrfam": "IPv4", 00:16:49.997 "traddr": "10.0.0.2", 00:16:49.997 "trsvcid": "4420" 00:16:49.997 }, 00:16:49.997 "peer_address": { 00:16:49.997 "trtype": "TCP", 00:16:49.997 "adrfam": "IPv4", 00:16:49.997 "traddr": "10.0.0.1", 00:16:49.997 "trsvcid": "55392" 00:16:49.997 }, 00:16:49.997 "auth": { 00:16:49.997 "state": "completed", 00:16:49.997 "digest": "sha256", 00:16:49.997 "dhgroup": "ffdhe3072" 00:16:49.997 } 00:16:49.997 } 00:16:49.997 ]' 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.997 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.255 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.255 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.255 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.255 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.514 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:16:50.514 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:16:51.448 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.448 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.448 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.448 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.448 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.448 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.448 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.448 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.707 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.965 00:16:51.965 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.965 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:51.965 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.223 { 00:16:52.223 "cntlid": 23, 00:16:52.223 "qid": 0, 00:16:52.223 "state": "enabled", 00:16:52.223 "thread": "nvmf_tgt_poll_group_000", 00:16:52.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:52.223 "listen_address": { 00:16:52.223 "trtype": "TCP", 00:16:52.223 "adrfam": "IPv4", 00:16:52.223 "traddr": "10.0.0.2", 00:16:52.223 "trsvcid": "4420" 00:16:52.223 }, 00:16:52.223 "peer_address": { 00:16:52.223 "trtype": "TCP", 00:16:52.223 "adrfam": "IPv4", 00:16:52.223 "traddr": "10.0.0.1", 00:16:52.223 "trsvcid": "55420" 00:16:52.223 }, 00:16:52.223 "auth": { 00:16:52.223 "state": "completed", 00:16:52.223 "digest": "sha256", 00:16:52.223 "dhgroup": "ffdhe3072" 00:16:52.223 } 00:16:52.223 } 00:16:52.223 ]' 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.223 19:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.223 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.482 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.482 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.482 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.740 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:16:52.740 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:16:53.684 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.684 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.684 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:53.684 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.684 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.684 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.684 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.684 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.684 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.684 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.250 00:16:54.250 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.250 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.250 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.509 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.509 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.509 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.509 19:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.509 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.509 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.509 { 00:16:54.509 "cntlid": 25, 00:16:54.509 "qid": 0, 00:16:54.509 "state": "enabled", 00:16:54.509 "thread": "nvmf_tgt_poll_group_000", 00:16:54.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:54.509 "listen_address": { 00:16:54.509 "trtype": "TCP", 00:16:54.509 "adrfam": "IPv4", 00:16:54.509 "traddr": "10.0.0.2", 00:16:54.509 "trsvcid": "4420" 00:16:54.509 }, 00:16:54.509 "peer_address": { 00:16:54.509 "trtype": "TCP", 00:16:54.509 "adrfam": "IPv4", 00:16:54.509 "traddr": "10.0.0.1", 00:16:54.509 "trsvcid": "55452" 00:16:54.509 }, 00:16:54.509 "auth": { 00:16:54.509 "state": "completed", 00:16:54.509 "digest": "sha256", 00:16:54.509 "dhgroup": "ffdhe4096" 00:16:54.509 } 00:16:54.509 } 00:16:54.509 ]' 00:16:54.509 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.509 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.509 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.509 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.509 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.509 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.509 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.509 19:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.078 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:16:55.078 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.016 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.583 00:16:56.583 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.583 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.583 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.841 { 00:16:56.841 "cntlid": 27, 00:16:56.841 "qid": 0, 00:16:56.841 "state": "enabled", 00:16:56.841 "thread": "nvmf_tgt_poll_group_000", 00:16:56.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:56.841 
"listen_address": { 00:16:56.841 "trtype": "TCP", 00:16:56.841 "adrfam": "IPv4", 00:16:56.841 "traddr": "10.0.0.2", 00:16:56.841 "trsvcid": "4420" 00:16:56.841 }, 00:16:56.841 "peer_address": { 00:16:56.841 "trtype": "TCP", 00:16:56.841 "adrfam": "IPv4", 00:16:56.841 "traddr": "10.0.0.1", 00:16:56.841 "trsvcid": "41456" 00:16:56.841 }, 00:16:56.841 "auth": { 00:16:56.841 "state": "completed", 00:16:56.841 "digest": "sha256", 00:16:56.841 "dhgroup": "ffdhe4096" 00:16:56.841 } 00:16:56.841 } 00:16:56.841 ]' 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.841 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.100 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.100 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.100 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.359 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:16:57.359 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:16:58.294 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.294 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.294 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.294 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.294 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.294 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.294 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.294 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.553 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.810 00:16:58.810 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:58.810 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.810 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.067 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.067 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.067 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.067 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.067 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.067 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.067 { 00:16:59.067 "cntlid": 29, 00:16:59.067 "qid": 0, 00:16:59.067 "state": "enabled", 00:16:59.067 "thread": "nvmf_tgt_poll_group_000", 00:16:59.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:59.067 "listen_address": { 00:16:59.067 "trtype": "TCP", 00:16:59.067 "adrfam": "IPv4", 00:16:59.067 "traddr": "10.0.0.2", 00:16:59.067 "trsvcid": "4420" 00:16:59.067 }, 00:16:59.067 "peer_address": { 00:16:59.067 "trtype": "TCP", 00:16:59.067 "adrfam": "IPv4", 00:16:59.067 "traddr": "10.0.0.1", 00:16:59.067 "trsvcid": "41482" 00:16:59.067 }, 00:16:59.067 "auth": { 00:16:59.067 "state": "completed", 00:16:59.067 "digest": "sha256", 00:16:59.067 "dhgroup": "ffdhe4096" 00:16:59.067 } 00:16:59.067 } 00:16:59.067 ]' 00:16:59.067 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.067 19:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.067 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.325 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.325 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.325 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.325 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.325 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.583 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:16:59.583 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:00.519 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.519 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.519 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.519 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.519 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.519 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.519 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:00.519 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:00.775 19:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.775 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.032 00:17:01.032 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.032 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.032 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.598 19:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.598 { 00:17:01.598 "cntlid": 31, 00:17:01.598 "qid": 0, 00:17:01.598 "state": "enabled", 00:17:01.598 "thread": "nvmf_tgt_poll_group_000", 00:17:01.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:01.598 "listen_address": { 00:17:01.598 "trtype": "TCP", 00:17:01.598 "adrfam": "IPv4", 00:17:01.598 "traddr": "10.0.0.2", 00:17:01.598 "trsvcid": "4420" 00:17:01.598 }, 00:17:01.598 "peer_address": { 00:17:01.598 "trtype": "TCP", 00:17:01.598 "adrfam": "IPv4", 00:17:01.598 "traddr": "10.0.0.1", 00:17:01.598 "trsvcid": "41518" 00:17:01.598 }, 00:17:01.598 "auth": { 00:17:01.598 "state": "completed", 00:17:01.598 "digest": "sha256", 00:17:01.598 "dhgroup": "ffdhe4096" 00:17:01.598 } 00:17:01.598 } 00:17:01.598 ]' 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.598 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.598 19:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.856 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:01.856 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:02.792 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.792 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.792 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.792 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.792 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.792 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.792 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.792 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:17:02.792 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.050 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.618 00:17:03.618 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.618 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.618 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.876 { 00:17:03.876 "cntlid": 33, 00:17:03.876 "qid": 0, 00:17:03.876 "state": "enabled", 00:17:03.876 "thread": "nvmf_tgt_poll_group_000", 00:17:03.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:03.876 "listen_address": { 
00:17:03.876 "trtype": "TCP", 00:17:03.876 "adrfam": "IPv4", 00:17:03.876 "traddr": "10.0.0.2", 00:17:03.876 "trsvcid": "4420" 00:17:03.876 }, 00:17:03.876 "peer_address": { 00:17:03.876 "trtype": "TCP", 00:17:03.876 "adrfam": "IPv4", 00:17:03.876 "traddr": "10.0.0.1", 00:17:03.876 "trsvcid": "41562" 00:17:03.876 }, 00:17:03.876 "auth": { 00:17:03.876 "state": "completed", 00:17:03.876 "digest": "sha256", 00:17:03.876 "dhgroup": "ffdhe6144" 00:17:03.876 } 00:17:03.876 } 00:17:03.876 ]' 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.876 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.134 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:04.134 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:05.086 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.086 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.086 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.086 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.086 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.086 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.086 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.086 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.343 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.344 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.913 00:17:05.913 19:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.913 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.913 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.172 { 00:17:06.172 "cntlid": 35, 00:17:06.172 "qid": 0, 00:17:06.172 "state": "enabled", 00:17:06.172 "thread": "nvmf_tgt_poll_group_000", 00:17:06.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:06.172 "listen_address": { 00:17:06.172 "trtype": "TCP", 00:17:06.172 "adrfam": "IPv4", 00:17:06.172 "traddr": "10.0.0.2", 00:17:06.172 "trsvcid": "4420" 00:17:06.172 }, 00:17:06.172 "peer_address": { 00:17:06.172 "trtype": "TCP", 00:17:06.172 "adrfam": "IPv4", 00:17:06.172 "traddr": "10.0.0.1", 00:17:06.172 "trsvcid": "47432" 00:17:06.172 }, 00:17:06.172 "auth": { 00:17:06.172 "state": "completed", 00:17:06.172 "digest": "sha256", 00:17:06.172 "dhgroup": "ffdhe6144" 00:17:06.172 } 00:17:06.172 } 00:17:06.172 ]' 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.172 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.431 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.431 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.431 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.689 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:06.689 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:07.627 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.627 19:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.627 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.627 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.627 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.627 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.627 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.627 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.886 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:07.886 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.887 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.456 00:17:08.456 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.456 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.456 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.456 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.456 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.456 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.456 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.714 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.714 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.714 { 00:17:08.714 "cntlid": 37, 00:17:08.714 "qid": 0, 00:17:08.714 "state": "enabled", 00:17:08.714 "thread": "nvmf_tgt_poll_group_000", 00:17:08.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:08.715 "listen_address": { 00:17:08.715 "trtype": "TCP", 00:17:08.715 "adrfam": "IPv4", 00:17:08.715 "traddr": "10.0.0.2", 00:17:08.715 "trsvcid": "4420" 00:17:08.715 }, 00:17:08.715 "peer_address": { 00:17:08.715 "trtype": "TCP", 00:17:08.715 "adrfam": "IPv4", 00:17:08.715 "traddr": "10.0.0.1", 00:17:08.715 "trsvcid": "47462" 00:17:08.715 }, 00:17:08.715 "auth": { 00:17:08.715 "state": "completed", 00:17:08.715 "digest": "sha256", 00:17:08.715 "dhgroup": "ffdhe6144" 00:17:08.715 } 00:17:08.715 } 00:17:08.715 ]' 00:17:08.715 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.715 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.715 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.715 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.715 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.715 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:08.715 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.715 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.977 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:08.978 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:09.989 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.989 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.989 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.989 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.989 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.990 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:09.990 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.990 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.248 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.818 00:17:10.818 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.818 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.818 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.077 { 00:17:11.077 "cntlid": 39, 00:17:11.077 "qid": 0, 00:17:11.077 "state": "enabled", 00:17:11.077 "thread": "nvmf_tgt_poll_group_000", 00:17:11.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:11.077 "listen_address": { 00:17:11.077 "trtype": 
"TCP", 00:17:11.077 "adrfam": "IPv4", 00:17:11.077 "traddr": "10.0.0.2", 00:17:11.077 "trsvcid": "4420" 00:17:11.077 }, 00:17:11.077 "peer_address": { 00:17:11.077 "trtype": "TCP", 00:17:11.077 "adrfam": "IPv4", 00:17:11.077 "traddr": "10.0.0.1", 00:17:11.077 "trsvcid": "47488" 00:17:11.077 }, 00:17:11.077 "auth": { 00:17:11.077 "state": "completed", 00:17:11.077 "digest": "sha256", 00:17:11.077 "dhgroup": "ffdhe6144" 00:17:11.077 } 00:17:11.077 } 00:17:11.077 ]' 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.077 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.338 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:11.338 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:12.274 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.274 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.274 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.274 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.274 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.274 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.274 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.274 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.274 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.532 19:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.532 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.470 00:17:13.470 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.470 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.470 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.728 { 00:17:13.728 "cntlid": 41, 00:17:13.728 "qid": 0, 00:17:13.728 "state": "enabled", 00:17:13.728 "thread": "nvmf_tgt_poll_group_000", 00:17:13.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:13.728 "listen_address": { 00:17:13.728 "trtype": "TCP", 00:17:13.728 "adrfam": "IPv4", 00:17:13.728 "traddr": "10.0.0.2", 00:17:13.728 "trsvcid": "4420" 00:17:13.728 }, 00:17:13.728 "peer_address": { 00:17:13.728 "trtype": "TCP", 00:17:13.728 "adrfam": "IPv4", 00:17:13.728 "traddr": "10.0.0.1", 00:17:13.728 "trsvcid": "47502" 00:17:13.728 }, 00:17:13.728 "auth": { 00:17:13.728 "state": "completed", 00:17:13.728 "digest": "sha256", 00:17:13.728 "dhgroup": "ffdhe8192" 00:17:13.728 } 00:17:13.728 } 00:17:13.728 ]' 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.728 19:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.728 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.298 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:14.298 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:15.233 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
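Each `connect_authenticate` iteration in the log above verifies the negotiated authentication parameters by fetching the qpair list with the `nvmf_subsystem_get_qpairs` RPC and comparing `.auth.digest`, `.auth.dhgroup`, and `.auth.state` with `jq`. The same check can be sketched in Python against a qpair record shaped like the ones logged above (field names and values copied from the log; this is an illustrative sketch, not the test's actual code):

```python
import json

# Sample qpair record as returned by the nvmf_subsystem_get_qpairs RPC
# (fields and values taken from the log output above, trimmed for brevity).
qpairs_json = '''
[
  {
    "cntlid": 41,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "ffdhe8192"
    }
  }
]
'''

def check_auth(qpairs, digest, dhgroup):
    """Mirror the test's jq checks on .[0].auth.{digest,dhgroup,state}."""
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

qpairs = json.loads(qpairs_json)
print(check_auth(qpairs, "sha256", "ffdhe8192"))  # → True
```

In the harness this gate decides whether the iteration passes before the controller is detached and the host entry removed for the next digest/dhgroup combination.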
00:17:15.233 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.233 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.233 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.233 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.233 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.233 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.234 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.171 00:17:16.171 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.171 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.172 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.430 { 00:17:16.430 "cntlid": 43, 00:17:16.430 "qid": 0, 00:17:16.430 "state": "enabled", 00:17:16.430 "thread": "nvmf_tgt_poll_group_000", 00:17:16.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:16.430 "listen_address": { 00:17:16.430 "trtype": "TCP", 00:17:16.430 "adrfam": "IPv4", 00:17:16.430 "traddr": "10.0.0.2", 00:17:16.430 "trsvcid": "4420" 00:17:16.430 }, 00:17:16.430 "peer_address": { 00:17:16.430 "trtype": "TCP", 00:17:16.430 "adrfam": "IPv4", 00:17:16.430 "traddr": "10.0.0.1", 00:17:16.430 "trsvcid": "47600" 00:17:16.430 }, 00:17:16.430 "auth": { 00:17:16.430 "state": "completed", 00:17:16.430 "digest": "sha256", 00:17:16.430 "dhgroup": "ffdhe8192" 00:17:16.430 } 00:17:16.430 } 00:17:16.430 ]' 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.430 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.689 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:16.689 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.689 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.950 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:16.950 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:17.896 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.896 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.896 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.896 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.896 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.896 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
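The `--dhchap-secret` strings passed to `nvme connect` above use the `DHHC-1:<hh>:<base64>:` transport format. As an assumption based on the NVMe-oF DH-HMAC-CHAP secret representation and the output of nvme-cli's key generator (the log itself does not describe the encoding), the base64 payload is the raw key followed by a little-endian CRC-32 of the key, and the `<hh>` field selects the key transformation hash. A hedged sketch that builds and validates a secret under that assumed layout:

```python
import base64
import struct
import zlib

def build_dhchap_secret(key: bytes, hash_id: str = "00") -> str:
    """Encode key || CRC-32(key, little-endian) as a DHHC-1 secret string.

    Assumed layout mirroring nvme-cli's generated keys; hash_id is the
    two-digit key-transformation field from the secret prefix.
    """
    payload = key + struct.pack("<I", zlib.crc32(key))
    return f"DHHC-1:{hash_id}:{base64.b64encode(payload).decode()}:"

def parse_dhchap_secret(secret: str) -> bytes:
    """Decode a DHHC-1 secret and verify its trailing CRC-32."""
    prefix, _hash_id, b64, _trailer = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    payload = base64.b64decode(b64)
    key, crc = payload[:-4], struct.unpack("<I", payload[-4:])[0]
    if zlib.crc32(key) != crc:
        raise ValueError("DHHC-1 secret failed CRC-32 check")
    return key

# Round-trip a hypothetical 32-byte key (not one of the secrets in the log).
secret = build_dhchap_secret(b"\x01" * 32)
assert parse_dhchap_secret(secret) == b"\x01" * 32
```

The CRC check explains why a corrupted or hand-edited secret is rejected by the target before any DH-HMAC-CHAP exchange takes place.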
00:17:17.896 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.896 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.153 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.090 00:17:19.090 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.090 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.090 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.091 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.091 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.091 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.091 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.091 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.091 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.091 { 00:17:19.091 "cntlid": 45, 00:17:19.091 "qid": 0, 00:17:19.091 "state": "enabled", 00:17:19.091 "thread": "nvmf_tgt_poll_group_000", 00:17:19.091 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:19.091 "listen_address": { 00:17:19.091 "trtype": "TCP", 00:17:19.091 "adrfam": "IPv4", 00:17:19.091 "traddr": "10.0.0.2", 00:17:19.091 "trsvcid": "4420" 00:17:19.091 }, 00:17:19.091 "peer_address": { 00:17:19.091 "trtype": "TCP", 00:17:19.091 "adrfam": "IPv4", 00:17:19.091 "traddr": "10.0.0.1", 00:17:19.091 "trsvcid": "47626" 00:17:19.091 }, 00:17:19.091 "auth": { 00:17:19.091 "state": "completed", 00:17:19.091 "digest": "sha256", 00:17:19.091 "dhgroup": "ffdhe8192" 00:17:19.091 } 00:17:19.091 } 00:17:19.091 ]' 00:17:19.091 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.347 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.347 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.348 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.348 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.348 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.348 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.348 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.605 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:19.605 19:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:20.538 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.538 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.538 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.538 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.538 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.538 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.538 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.538 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.797 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.735 00:17:21.735 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:21.735 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.735 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.994 { 00:17:21.994 "cntlid": 47, 00:17:21.994 "qid": 0, 00:17:21.994 "state": "enabled", 00:17:21.994 "thread": "nvmf_tgt_poll_group_000", 00:17:21.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:21.994 "listen_address": { 00:17:21.994 "trtype": "TCP", 00:17:21.994 "adrfam": "IPv4", 00:17:21.994 "traddr": "10.0.0.2", 00:17:21.994 "trsvcid": "4420" 00:17:21.994 }, 00:17:21.994 "peer_address": { 00:17:21.994 "trtype": "TCP", 00:17:21.994 "adrfam": "IPv4", 00:17:21.994 "traddr": "10.0.0.1", 00:17:21.994 "trsvcid": "47650" 00:17:21.994 }, 00:17:21.994 "auth": { 00:17:21.994 "state": "completed", 00:17:21.994 "digest": "sha256", 00:17:21.994 "dhgroup": "ffdhe8192" 00:17:21.994 } 00:17:21.994 } 00:17:21.994 ]' 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.994 19:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.994 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.252 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:22.252 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:23.190 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:23.450 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:23.450 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.450 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.450 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.450 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:23.450 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.450 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.450 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.450 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.450 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.450 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.450 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.450 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.017 00:17:24.017 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.017 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.017 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.275 19:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.275 { 00:17:24.275 "cntlid": 49, 00:17:24.275 "qid": 0, 00:17:24.275 "state": "enabled", 00:17:24.275 "thread": "nvmf_tgt_poll_group_000", 00:17:24.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:24.275 "listen_address": { 00:17:24.275 "trtype": "TCP", 00:17:24.275 "adrfam": "IPv4", 00:17:24.275 "traddr": "10.0.0.2", 00:17:24.275 "trsvcid": "4420" 00:17:24.275 }, 00:17:24.275 "peer_address": { 00:17:24.275 "trtype": "TCP", 00:17:24.275 "adrfam": "IPv4", 00:17:24.275 "traddr": "10.0.0.1", 00:17:24.275 "trsvcid": "47686" 00:17:24.275 }, 00:17:24.275 "auth": { 00:17:24.275 "state": "completed", 00:17:24.275 "digest": "sha384", 00:17:24.275 "dhgroup": "null" 00:17:24.275 } 00:17:24.275 } 00:17:24.275 ]' 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.275 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.841 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:24.841 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.777 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.778 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.778 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.778 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.778 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.346 00:17:26.346 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.346 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.346 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.346 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.346 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.346 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.346 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.605 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.605 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.605 { 00:17:26.605 "cntlid": 51, 
00:17:26.605 "qid": 0, 00:17:26.605 "state": "enabled", 00:17:26.605 "thread": "nvmf_tgt_poll_group_000", 00:17:26.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:26.605 "listen_address": { 00:17:26.605 "trtype": "TCP", 00:17:26.605 "adrfam": "IPv4", 00:17:26.605 "traddr": "10.0.0.2", 00:17:26.605 "trsvcid": "4420" 00:17:26.605 }, 00:17:26.605 "peer_address": { 00:17:26.605 "trtype": "TCP", 00:17:26.605 "adrfam": "IPv4", 00:17:26.605 "traddr": "10.0.0.1", 00:17:26.605 "trsvcid": "53310" 00:17:26.605 }, 00:17:26.605 "auth": { 00:17:26.605 "state": "completed", 00:17:26.605 "digest": "sha384", 00:17:26.605 "dhgroup": "null" 00:17:26.605 } 00:17:26.605 } 00:17:26.605 ]' 00:17:26.605 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.605 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.605 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.605 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.605 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.605 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.605 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.605 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.863 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret 
DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:26.863 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:27.797 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.797 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.797 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.797 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.797 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.797 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.797 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:27.797 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.055 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.312 00:17:28.312 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.312 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.312 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.570 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.570 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.570 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.570 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.570 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.570 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.570 { 00:17:28.570 "cntlid": 53, 00:17:28.570 "qid": 0, 00:17:28.570 "state": "enabled", 00:17:28.570 "thread": "nvmf_tgt_poll_group_000", 00:17:28.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:28.570 "listen_address": { 00:17:28.570 "trtype": "TCP", 00:17:28.570 "adrfam": "IPv4", 00:17:28.570 "traddr": "10.0.0.2", 00:17:28.570 "trsvcid": "4420" 00:17:28.570 }, 00:17:28.570 "peer_address": { 00:17:28.570 "trtype": "TCP", 00:17:28.570 "adrfam": "IPv4", 00:17:28.570 "traddr": "10.0.0.1", 00:17:28.570 "trsvcid": "53330" 00:17:28.570 }, 00:17:28.570 "auth": { 00:17:28.570 "state": "completed", 00:17:28.570 "digest": "sha384", 00:17:28.570 "dhgroup": "null" 00:17:28.570 } 00:17:28.570 } 
00:17:28.570 ]' 00:17:28.570 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.570 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.570 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.828 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:28.828 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.828 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.828 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.828 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.086 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:29.086 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:30.021 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.021 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.021 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:30.021 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.021 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.021 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.021 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.021 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:30.021 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.280 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.539 00:17:30.539 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.539 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.539 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.798 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.798 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:30.798 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.798 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.798 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.798 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.798 { 00:17:30.798 "cntlid": 55, 00:17:30.798 "qid": 0, 00:17:30.798 "state": "enabled", 00:17:30.798 "thread": "nvmf_tgt_poll_group_000", 00:17:30.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:30.798 "listen_address": { 00:17:30.798 "trtype": "TCP", 00:17:30.798 "adrfam": "IPv4", 00:17:30.798 "traddr": "10.0.0.2", 00:17:30.798 "trsvcid": "4420" 00:17:30.798 }, 00:17:30.798 "peer_address": { 00:17:30.798 "trtype": "TCP", 00:17:30.798 "adrfam": "IPv4", 00:17:30.798 "traddr": "10.0.0.1", 00:17:30.798 "trsvcid": "53350" 00:17:30.798 }, 00:17:30.798 "auth": { 00:17:30.798 "state": "completed", 00:17:30.798 "digest": "sha384", 00:17:30.798 "dhgroup": "null" 00:17:30.798 } 00:17:30.798 } 00:17:30.798 ]' 00:17:30.798 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.798 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.798 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.056 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:31.056 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.056 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.056 19:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.056 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.314 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:31.314 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:32.250 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.250 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.250 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.250 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.250 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.250 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.250 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.250 19:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:32.250 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.530 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.860 00:17:32.860 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.860 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.860 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.118 { 00:17:33.118 "cntlid": 57, 00:17:33.118 "qid": 0, 00:17:33.118 "state": "enabled", 00:17:33.118 "thread": "nvmf_tgt_poll_group_000", 00:17:33.118 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:33.118 "listen_address": { 00:17:33.118 "trtype": "TCP", 00:17:33.118 "adrfam": "IPv4", 00:17:33.118 "traddr": "10.0.0.2", 00:17:33.118 "trsvcid": "4420" 00:17:33.118 }, 00:17:33.118 "peer_address": { 00:17:33.118 "trtype": "TCP", 00:17:33.118 "adrfam": "IPv4", 00:17:33.118 "traddr": "10.0.0.1", 00:17:33.118 "trsvcid": "53364" 00:17:33.118 }, 00:17:33.118 "auth": { 00:17:33.118 "state": "completed", 00:17:33.118 "digest": "sha384", 00:17:33.118 "dhgroup": "ffdhe2048" 00:17:33.118 } 00:17:33.118 } 00:17:33.118 ]' 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.118 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.377 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret 
DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:33.377 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:34.309 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.309 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.309 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.309 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.309 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.309 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.309 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:34.309 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:34.566 19:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.566 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.824 00:17:35.081 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.081 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.081 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.339 { 00:17:35.339 "cntlid": 59, 00:17:35.339 "qid": 0, 00:17:35.339 "state": "enabled", 00:17:35.339 "thread": "nvmf_tgt_poll_group_000", 00:17:35.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:35.339 "listen_address": { 00:17:35.339 "trtype": "TCP", 00:17:35.339 "adrfam": "IPv4", 00:17:35.339 "traddr": "10.0.0.2", 00:17:35.339 "trsvcid": "4420" 00:17:35.339 }, 00:17:35.339 "peer_address": { 00:17:35.339 "trtype": "TCP", 00:17:35.339 "adrfam": "IPv4", 00:17:35.339 "traddr": "10.0.0.1", 00:17:35.339 "trsvcid": "37540" 00:17:35.339 }, 00:17:35.339 "auth": { 00:17:35.339 "state": 
"completed", 00:17:35.339 "digest": "sha384", 00:17:35.339 "dhgroup": "ffdhe2048" 00:17:35.339 } 00:17:35.339 } 00:17:35.339 ]' 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.339 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.596 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:35.596 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:36.531 19:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.531 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.531 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.531 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.531 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.531 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.531 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.531 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.790 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.791 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.791 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.791 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.049 00:17:37.049 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.049 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.049 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.308 
19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.308 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.308 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.308 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.308 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.308 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.308 { 00:17:37.308 "cntlid": 61, 00:17:37.308 "qid": 0, 00:17:37.308 "state": "enabled", 00:17:37.308 "thread": "nvmf_tgt_poll_group_000", 00:17:37.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:37.308 "listen_address": { 00:17:37.308 "trtype": "TCP", 00:17:37.308 "adrfam": "IPv4", 00:17:37.308 "traddr": "10.0.0.2", 00:17:37.308 "trsvcid": "4420" 00:17:37.308 }, 00:17:37.308 "peer_address": { 00:17:37.308 "trtype": "TCP", 00:17:37.308 "adrfam": "IPv4", 00:17:37.308 "traddr": "10.0.0.1", 00:17:37.308 "trsvcid": "37566" 00:17:37.308 }, 00:17:37.308 "auth": { 00:17:37.308 "state": "completed", 00:17:37.308 "digest": "sha384", 00:17:37.308 "dhgroup": "ffdhe2048" 00:17:37.308 } 00:17:37.308 } 00:17:37.308 ]' 00:17:37.308 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.567 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.567 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.567 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.567 19:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.567 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.567 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.567 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.825 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:37.825 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:38.779 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.779 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.779 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.779 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.779 
19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.779 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.780 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.780 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.038 19:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.038 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.296 00:17:39.297 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.297 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.297 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.564 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.564 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.564 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.564 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.564 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.564 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.564 { 00:17:39.564 "cntlid": 63, 00:17:39.564 
"qid": 0, 00:17:39.564 "state": "enabled", 00:17:39.564 "thread": "nvmf_tgt_poll_group_000", 00:17:39.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:39.564 "listen_address": { 00:17:39.564 "trtype": "TCP", 00:17:39.564 "adrfam": "IPv4", 00:17:39.564 "traddr": "10.0.0.2", 00:17:39.564 "trsvcid": "4420" 00:17:39.564 }, 00:17:39.564 "peer_address": { 00:17:39.564 "trtype": "TCP", 00:17:39.564 "adrfam": "IPv4", 00:17:39.564 "traddr": "10.0.0.1", 00:17:39.564 "trsvcid": "37596" 00:17:39.564 }, 00:17:39.564 "auth": { 00:17:39.564 "state": "completed", 00:17:39.564 "digest": "sha384", 00:17:39.564 "dhgroup": "ffdhe2048" 00:17:39.564 } 00:17:39.564 } 00:17:39.564 ]' 00:17:39.564 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.564 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.564 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.826 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.826 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.826 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.826 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.826 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.083 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:40.083 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:41.018 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.019 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.019 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.019 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.019 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.019 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.019 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.019 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.019 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.276 19:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.276 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.534 00:17:41.535 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.535 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.535 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.793 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.793 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.793 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.793 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.793 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.793 { 00:17:41.793 "cntlid": 65, 00:17:41.793 "qid": 0, 00:17:41.793 "state": "enabled", 00:17:41.793 "thread": "nvmf_tgt_poll_group_000", 00:17:41.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:41.793 "listen_address": { 00:17:41.793 "trtype": "TCP", 00:17:41.793 "adrfam": "IPv4", 00:17:41.793 "traddr": "10.0.0.2", 00:17:41.793 "trsvcid": "4420" 00:17:41.793 }, 00:17:41.793 "peer_address": { 00:17:41.793 "trtype": "TCP", 00:17:41.793 "adrfam": "IPv4", 00:17:41.793 "traddr": "10.0.0.1", 00:17:41.793 "trsvcid": "37628" 00:17:41.793 }, 00:17:41.793 "auth": { 00:17:41.793 "state": 
"completed", 00:17:41.793 "digest": "sha384", 00:17:41.793 "dhgroup": "ffdhe3072" 00:17:41.793 } 00:17:41.793 } 00:17:41.793 ]' 00:17:41.793 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.793 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.793 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.050 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.051 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.051 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.051 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.051 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.308 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:42.308 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret 
DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:43.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.503 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.760 00:17:43.760 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.761 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.761 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.018 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.018 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.018 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.018 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.018 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.018 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.018 { 00:17:44.018 "cntlid": 67, 00:17:44.018 "qid": 0, 00:17:44.018 "state": "enabled", 00:17:44.018 "thread": "nvmf_tgt_poll_group_000", 00:17:44.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:44.019 "listen_address": { 00:17:44.019 "trtype": "TCP", 00:17:44.019 "adrfam": "IPv4", 00:17:44.019 "traddr": "10.0.0.2", 00:17:44.019 "trsvcid": "4420" 00:17:44.019 }, 00:17:44.019 "peer_address": { 00:17:44.019 "trtype": "TCP", 00:17:44.019 "adrfam": "IPv4", 00:17:44.019 "traddr": "10.0.0.1", 00:17:44.019 "trsvcid": "37666" 00:17:44.019 }, 00:17:44.019 "auth": { 00:17:44.019 "state": "completed", 00:17:44.019 "digest": "sha384", 00:17:44.019 "dhgroup": "ffdhe3072" 00:17:44.019 } 00:17:44.019 } 00:17:44.019 ]' 00:17:44.019 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.019 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.019 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.277 19:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.277 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.277 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.277 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.277 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.534 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:44.535 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:45.471 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.471 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.471 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:45.471 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.471 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.471 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.471 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:45.471 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:45.729 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:45.729 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.729 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.729 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.729 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.729 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.730 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.730 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.730 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:45.730 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.730 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.730 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.730 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.989 00:17:45.989 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.989 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.989 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.247 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.247 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.247 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.247 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.247 19:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.247 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.247 { 00:17:46.247 "cntlid": 69, 00:17:46.247 "qid": 0, 00:17:46.247 "state": "enabled", 00:17:46.247 "thread": "nvmf_tgt_poll_group_000", 00:17:46.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:46.247 "listen_address": { 00:17:46.247 "trtype": "TCP", 00:17:46.247 "adrfam": "IPv4", 00:17:46.247 "traddr": "10.0.0.2", 00:17:46.247 "trsvcid": "4420" 00:17:46.247 }, 00:17:46.247 "peer_address": { 00:17:46.247 "trtype": "TCP", 00:17:46.247 "adrfam": "IPv4", 00:17:46.247 "traddr": "10.0.0.1", 00:17:46.247 "trsvcid": "42650" 00:17:46.247 }, 00:17:46.247 "auth": { 00:17:46.247 "state": "completed", 00:17:46.247 "digest": "sha384", 00:17:46.247 "dhgroup": "ffdhe3072" 00:17:46.247 } 00:17:46.247 } 00:17:46.247 ]' 00:17:46.248 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.248 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.248 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.248 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.248 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.506 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.506 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.506 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.766 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:46.766 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:47.701 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.701 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.701 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.701 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.701 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.701 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.701 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.701 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.959 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.217 00:17:48.217 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.217 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.217 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.475 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.475 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.475 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.475 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.475 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.475 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.475 { 00:17:48.475 "cntlid": 71, 00:17:48.475 "qid": 0, 00:17:48.475 "state": "enabled", 00:17:48.475 "thread": "nvmf_tgt_poll_group_000", 00:17:48.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:48.475 "listen_address": { 00:17:48.475 "trtype": "TCP", 00:17:48.475 "adrfam": "IPv4", 00:17:48.475 "traddr": "10.0.0.2", 00:17:48.475 "trsvcid": "4420" 00:17:48.475 }, 00:17:48.476 "peer_address": { 00:17:48.476 "trtype": "TCP", 00:17:48.476 "adrfam": "IPv4", 00:17:48.476 "traddr": "10.0.0.1", 
00:17:48.476 "trsvcid": "42690" 00:17:48.476 }, 00:17:48.476 "auth": { 00:17:48.476 "state": "completed", 00:17:48.476 "digest": "sha384", 00:17:48.476 "dhgroup": "ffdhe3072" 00:17:48.476 } 00:17:48.476 } 00:17:48.476 ]' 00:17:48.476 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.476 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.476 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.476 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.476 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.733 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.733 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.733 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.993 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:48.993 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:49.927 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.927 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.927 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.927 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.927 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.927 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.927 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.927 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.927 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.185 19:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.185 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.443 00:17:50.443 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.443 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.443 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.701 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.701 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.701 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.701 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.701 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.701 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.701 { 00:17:50.701 "cntlid": 73, 00:17:50.701 "qid": 0, 00:17:50.701 "state": "enabled", 00:17:50.701 "thread": "nvmf_tgt_poll_group_000", 00:17:50.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:50.701 "listen_address": { 00:17:50.701 "trtype": "TCP", 00:17:50.701 "adrfam": "IPv4", 00:17:50.701 "traddr": "10.0.0.2", 00:17:50.701 "trsvcid": "4420" 00:17:50.701 }, 00:17:50.701 "peer_address": { 00:17:50.701 "trtype": "TCP", 00:17:50.701 "adrfam": "IPv4", 00:17:50.701 "traddr": "10.0.0.1", 00:17:50.701 "trsvcid": "42716" 00:17:50.701 }, 00:17:50.701 "auth": { 00:17:50.701 "state": "completed", 00:17:50.701 "digest": "sha384", 00:17:50.701 "dhgroup": "ffdhe4096" 00:17:50.701 } 00:17:50.701 } 00:17:50.701 ]' 00:17:50.701 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.958 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.958 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.958 19:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.958 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.958 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.958 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.958 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.215 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:51.215 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:17:52.147 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.147 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.147 19:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.147 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.147 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.147 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.147 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:52.147 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.404 19:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.404 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.405 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.970 00:17:52.970 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.970 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.970 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.228 { 00:17:53.228 "cntlid": 75, 00:17:53.228 "qid": 0, 00:17:53.228 "state": "enabled", 00:17:53.228 "thread": "nvmf_tgt_poll_group_000", 00:17:53.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:53.228 "listen_address": { 00:17:53.228 "trtype": "TCP", 00:17:53.228 "adrfam": "IPv4", 00:17:53.228 "traddr": "10.0.0.2", 00:17:53.228 "trsvcid": "4420" 00:17:53.228 }, 00:17:53.228 "peer_address": { 00:17:53.228 "trtype": "TCP", 00:17:53.228 "adrfam": "IPv4", 00:17:53.228 "traddr": "10.0.0.1", 00:17:53.228 "trsvcid": "42738" 00:17:53.228 }, 00:17:53.228 "auth": { 00:17:53.228 "state": "completed", 00:17:53.228 "digest": "sha384", 00:17:53.228 "dhgroup": "ffdhe4096" 00:17:53.228 } 00:17:53.228 } 00:17:53.228 ]' 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.228 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.795 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:53.795 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:17:54.361 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.361 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.361 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.361 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.361 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.361 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.361 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.361 19:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.622 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:54.622 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.622 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.622 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:54.622 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.622 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.622 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.622 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.622 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.880 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.880 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.880 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.880 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.137 00:17:55.138 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.138 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.138 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.395 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.395 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.395 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.395 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.395 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.395 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.395 { 00:17:55.395 "cntlid": 77, 00:17:55.395 "qid": 0, 00:17:55.395 "state": "enabled", 00:17:55.395 "thread": "nvmf_tgt_poll_group_000", 00:17:55.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:55.395 "listen_address": { 00:17:55.395 "trtype": "TCP", 00:17:55.395 "adrfam": "IPv4", 00:17:55.395 "traddr": "10.0.0.2", 00:17:55.395 
"trsvcid": "4420" 00:17:55.395 }, 00:17:55.395 "peer_address": { 00:17:55.395 "trtype": "TCP", 00:17:55.395 "adrfam": "IPv4", 00:17:55.395 "traddr": "10.0.0.1", 00:17:55.395 "trsvcid": "58632" 00:17:55.395 }, 00:17:55.395 "auth": { 00:17:55.395 "state": "completed", 00:17:55.395 "digest": "sha384", 00:17:55.395 "dhgroup": "ffdhe4096" 00:17:55.395 } 00:17:55.395 } 00:17:55.395 ]' 00:17:55.395 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.395 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.395 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.652 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.652 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.652 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.652 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.652 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.911 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:55.911 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:17:56.844 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.844 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.844 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.845 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.845 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.845 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.845 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.845 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.103 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.361 00:17:57.361 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.361 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.361 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.619 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.619 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.619 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.619 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.905 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.905 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.905 { 00:17:57.905 "cntlid": 79, 00:17:57.905 "qid": 0, 00:17:57.905 "state": "enabled", 00:17:57.905 "thread": "nvmf_tgt_poll_group_000", 00:17:57.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:57.905 "listen_address": { 00:17:57.905 "trtype": "TCP", 00:17:57.905 "adrfam": "IPv4", 00:17:57.905 "traddr": "10.0.0.2", 00:17:57.905 "trsvcid": "4420" 00:17:57.905 }, 00:17:57.905 "peer_address": { 00:17:57.905 "trtype": "TCP", 00:17:57.905 "adrfam": "IPv4", 00:17:57.905 "traddr": "10.0.0.1", 00:17:57.905 "trsvcid": "58652" 00:17:57.905 }, 00:17:57.905 "auth": { 00:17:57.905 "state": "completed", 00:17:57.905 "digest": "sha384", 00:17:57.905 "dhgroup": "ffdhe4096" 00:17:57.905 } 00:17:57.905 } 00:17:57.905 ]' 00:17:57.905 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.905 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.906 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.906 19:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.906 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.906 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.906 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.906 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.188 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:58.188 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:17:59.122 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.122 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.122 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.122 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:59.122 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.122 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.122 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.122 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.122 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.380 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.945 00:17:59.945 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.946 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.946 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.204 19:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.204 { 00:18:00.204 "cntlid": 81, 00:18:00.204 "qid": 0, 00:18:00.204 "state": "enabled", 00:18:00.204 "thread": "nvmf_tgt_poll_group_000", 00:18:00.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:00.204 "listen_address": { 00:18:00.204 "trtype": "TCP", 00:18:00.204 "adrfam": "IPv4", 00:18:00.204 "traddr": "10.0.0.2", 00:18:00.204 "trsvcid": "4420" 00:18:00.204 }, 00:18:00.204 "peer_address": { 00:18:00.204 "trtype": "TCP", 00:18:00.204 "adrfam": "IPv4", 00:18:00.204 "traddr": "10.0.0.1", 00:18:00.204 "trsvcid": "58680" 00:18:00.204 }, 00:18:00.204 "auth": { 00:18:00.204 "state": "completed", 00:18:00.204 "digest": "sha384", 00:18:00.204 "dhgroup": "ffdhe6144" 00:18:00.204 } 00:18:00.204 } 00:18:00.204 ]' 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.204 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.463 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:00.463 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:01.398 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.398 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.398 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.398 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.398 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.398 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.398 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.398 19:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.657 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.231 00:18:02.231 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.231 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.231 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.494 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.494 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.494 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.494 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.752 { 00:18:02.752 "cntlid": 83, 00:18:02.752 "qid": 0, 00:18:02.752 "state": "enabled", 00:18:02.752 "thread": "nvmf_tgt_poll_group_000", 00:18:02.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:02.752 "listen_address": { 00:18:02.752 "trtype": "TCP", 00:18:02.752 "adrfam": "IPv4", 00:18:02.752 "traddr": "10.0.0.2", 00:18:02.752 
"trsvcid": "4420" 00:18:02.752 }, 00:18:02.752 "peer_address": { 00:18:02.752 "trtype": "TCP", 00:18:02.752 "adrfam": "IPv4", 00:18:02.752 "traddr": "10.0.0.1", 00:18:02.752 "trsvcid": "58698" 00:18:02.752 }, 00:18:02.752 "auth": { 00:18:02.752 "state": "completed", 00:18:02.752 "digest": "sha384", 00:18:02.752 "dhgroup": "ffdhe6144" 00:18:02.752 } 00:18:02.752 } 00:18:02.752 ]' 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.752 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.011 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:03.011 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:03.943 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.943 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.943 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.943 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.943 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.943 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.943 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.943 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.199 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:04.199 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.199 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:04.199 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.200 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.200 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.200 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.200 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.200 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.200 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.200 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.200 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.200 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.762 00:18:04.762 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.762 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:04.762 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.018 { 00:18:05.018 "cntlid": 85, 00:18:05.018 "qid": 0, 00:18:05.018 "state": "enabled", 00:18:05.018 "thread": "nvmf_tgt_poll_group_000", 00:18:05.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:05.018 "listen_address": { 00:18:05.018 "trtype": "TCP", 00:18:05.018 "adrfam": "IPv4", 00:18:05.018 "traddr": "10.0.0.2", 00:18:05.018 "trsvcid": "4420" 00:18:05.018 }, 00:18:05.018 "peer_address": { 00:18:05.018 "trtype": "TCP", 00:18:05.018 "adrfam": "IPv4", 00:18:05.018 "traddr": "10.0.0.1", 00:18:05.018 "trsvcid": "58726" 00:18:05.018 }, 00:18:05.018 "auth": { 00:18:05.018 "state": "completed", 00:18:05.018 "digest": "sha384", 00:18:05.018 "dhgroup": "ffdhe6144" 00:18:05.018 } 00:18:05.018 } 00:18:05.018 ]' 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.018 19:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.018 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.582 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:05.582 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:06.513 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.513 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.513 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.513 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.513 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.513 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.513 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.513 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.513 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:06.513 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.514 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.077 00:18:07.077 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.077 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.077 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.334 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.334 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.334 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.334 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.334 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.334 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.334 { 00:18:07.334 "cntlid": 87, 00:18:07.334 "qid": 0, 00:18:07.334 "state": "enabled", 00:18:07.334 "thread": "nvmf_tgt_poll_group_000", 00:18:07.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:07.334 "listen_address": { 00:18:07.334 "trtype": "TCP", 00:18:07.334 "adrfam": "IPv4", 00:18:07.334 "traddr": "10.0.0.2", 00:18:07.334 "trsvcid": "4420" 00:18:07.334 }, 00:18:07.334 "peer_address": { 00:18:07.334 "trtype": "TCP", 00:18:07.334 "adrfam": "IPv4", 00:18:07.334 "traddr": "10.0.0.1", 00:18:07.334 "trsvcid": "42846" 00:18:07.334 }, 00:18:07.334 "auth": { 00:18:07.334 "state": "completed", 00:18:07.334 "digest": "sha384", 00:18:07.334 "dhgroup": "ffdhe6144" 00:18:07.334 } 00:18:07.334 } 00:18:07.334 ]' 00:18:07.334 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.592 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.592 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.592 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.592 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.592 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.592 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.592 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.849 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:07.849 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:08.782 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.782 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.782 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.782 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.782 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.782 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.782 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.782 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.782 19:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.040 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.974 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.974 { 00:18:09.974 "cntlid": 89, 00:18:09.974 "qid": 0, 00:18:09.974 "state": "enabled", 00:18:09.974 "thread": "nvmf_tgt_poll_group_000", 00:18:09.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:09.974 "listen_address": { 00:18:09.974 "trtype": "TCP", 00:18:09.974 "adrfam": "IPv4", 00:18:09.974 "traddr": "10.0.0.2", 00:18:09.974 
"trsvcid": "4420" 00:18:09.974 }, 00:18:09.974 "peer_address": { 00:18:09.974 "trtype": "TCP", 00:18:09.974 "adrfam": "IPv4", 00:18:09.974 "traddr": "10.0.0.1", 00:18:09.974 "trsvcid": "42884" 00:18:09.974 }, 00:18:09.974 "auth": { 00:18:09.974 "state": "completed", 00:18:09.974 "digest": "sha384", 00:18:09.974 "dhgroup": "ffdhe8192" 00:18:09.974 } 00:18:09.974 } 00:18:09.974 ]' 00:18:09.974 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.230 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.230 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.230 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.230 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.230 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.231 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.231 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.488 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:10.488 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:11.420 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.420 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.420 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.420 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.420 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.420 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.420 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.420 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.678 19:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.678 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.611 00:18:12.611 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.611 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.611 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.869 { 00:18:12.869 "cntlid": 91, 00:18:12.869 "qid": 0, 00:18:12.869 "state": "enabled", 00:18:12.869 "thread": "nvmf_tgt_poll_group_000", 00:18:12.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:12.869 "listen_address": { 00:18:12.869 "trtype": "TCP", 00:18:12.869 "adrfam": "IPv4", 00:18:12.869 "traddr": "10.0.0.2", 00:18:12.869 "trsvcid": "4420" 00:18:12.869 }, 00:18:12.869 "peer_address": { 00:18:12.869 "trtype": "TCP", 00:18:12.869 "adrfam": "IPv4", 00:18:12.869 "traddr": "10.0.0.1", 00:18:12.869 "trsvcid": "42916" 00:18:12.869 }, 00:18:12.869 "auth": { 00:18:12.869 "state": "completed", 00:18:12.869 "digest": "sha384", 00:18:12.869 "dhgroup": "ffdhe8192" 00:18:12.869 } 00:18:12.869 } 00:18:12.869 ]' 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.869 19:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.869 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.127 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:13.127 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:14.061 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.061 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.061 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.061 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.061 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.061 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.061 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:14.061 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.320 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.252 00:18:15.252 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.252 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.252 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.509 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.509 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.509 19:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.509 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.509 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.509 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.509 { 00:18:15.509 "cntlid": 93, 00:18:15.509 "qid": 0, 00:18:15.509 "state": "enabled", 00:18:15.509 "thread": "nvmf_tgt_poll_group_000", 00:18:15.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:15.509 "listen_address": { 00:18:15.509 "trtype": "TCP", 00:18:15.509 "adrfam": "IPv4", 00:18:15.509 "traddr": "10.0.0.2", 00:18:15.509 "trsvcid": "4420" 00:18:15.509 }, 00:18:15.509 "peer_address": { 00:18:15.509 "trtype": "TCP", 00:18:15.509 "adrfam": "IPv4", 00:18:15.509 "traddr": "10.0.0.1", 00:18:15.509 "trsvcid": "41794" 00:18:15.509 }, 00:18:15.509 "auth": { 00:18:15.509 "state": "completed", 00:18:15.509 "digest": "sha384", 00:18:15.509 "dhgroup": "ffdhe8192" 00:18:15.509 } 00:18:15.509 } 00:18:15.509 ]' 00:18:15.509 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.509 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.509 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.509 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.509 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.509 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.509 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.509 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.766 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:15.766 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:16.696 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.954 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.954 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.954 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.954 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.954 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.954 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:16.954 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.212 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.148 00:18:18.148 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.148 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.148 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.148 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.148 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.148 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.148 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.148 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.148 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.148 { 00:18:18.148 "cntlid": 95, 00:18:18.148 "qid": 0, 00:18:18.148 "state": "enabled", 00:18:18.148 "thread": "nvmf_tgt_poll_group_000", 00:18:18.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:18.148 "listen_address": { 00:18:18.148 "trtype": "TCP", 00:18:18.148 "adrfam": 
"IPv4", 00:18:18.148 "traddr": "10.0.0.2", 00:18:18.148 "trsvcid": "4420" 00:18:18.148 }, 00:18:18.148 "peer_address": { 00:18:18.148 "trtype": "TCP", 00:18:18.148 "adrfam": "IPv4", 00:18:18.148 "traddr": "10.0.0.1", 00:18:18.148 "trsvcid": "41820" 00:18:18.148 }, 00:18:18.148 "auth": { 00:18:18.148 "state": "completed", 00:18:18.148 "digest": "sha384", 00:18:18.148 "dhgroup": "ffdhe8192" 00:18:18.148 } 00:18:18.148 } 00:18:18.148 ]' 00:18:18.406 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.406 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.406 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.406 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.406 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.406 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.406 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.406 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.664 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:18.664 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:19.597 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.597 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.597 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.597 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.597 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.598 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:19.598 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.598 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.598 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.598 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.856 
19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.856 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.114 00:18:20.114 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.114 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.114 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.372 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.372 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.372 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.372 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.372 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.372 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.372 { 00:18:20.372 "cntlid": 97, 00:18:20.372 "qid": 0, 00:18:20.372 "state": "enabled", 00:18:20.372 "thread": "nvmf_tgt_poll_group_000", 00:18:20.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:20.372 "listen_address": { 00:18:20.372 "trtype": "TCP", 00:18:20.372 "adrfam": "IPv4", 00:18:20.372 "traddr": "10.0.0.2", 00:18:20.372 "trsvcid": "4420" 00:18:20.372 }, 00:18:20.372 "peer_address": { 00:18:20.372 "trtype": "TCP", 00:18:20.372 "adrfam": "IPv4", 00:18:20.372 "traddr": "10.0.0.1", 00:18:20.372 "trsvcid": "41840" 00:18:20.372 }, 00:18:20.372 "auth": { 00:18:20.372 "state": "completed", 00:18:20.372 "digest": "sha512", 00:18:20.372 "dhgroup": "null" 00:18:20.372 } 00:18:20.372 } 00:18:20.372 ]' 00:18:20.372 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.372 19:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.372 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.631 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:20.631 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.631 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.631 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.631 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.888 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:20.888 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:21.822 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.822 19:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.822 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.822 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.822 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.822 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.822 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.822 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:22.079 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:22.079 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.079 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.079 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:22.079 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:22.079 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.080 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.080 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.080 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.080 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.080 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.080 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.080 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.337 00:18:22.337 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.337 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.337 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.595 { 00:18:22.595 "cntlid": 99, 00:18:22.595 "qid": 0, 00:18:22.595 "state": "enabled", 00:18:22.595 "thread": "nvmf_tgt_poll_group_000", 00:18:22.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:22.595 "listen_address": { 00:18:22.595 "trtype": "TCP", 00:18:22.595 "adrfam": "IPv4", 00:18:22.595 "traddr": "10.0.0.2", 00:18:22.595 "trsvcid": "4420" 00:18:22.595 }, 00:18:22.595 "peer_address": { 00:18:22.595 "trtype": "TCP", 00:18:22.595 "adrfam": "IPv4", 00:18:22.595 "traddr": "10.0.0.1", 00:18:22.595 "trsvcid": "41870" 00:18:22.595 }, 00:18:22.595 "auth": { 00:18:22.595 "state": "completed", 00:18:22.595 "digest": "sha512", 00:18:22.595 "dhgroup": "null" 00:18:22.595 } 00:18:22.595 } 00:18:22.595 ]' 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:22.595 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.941 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.941 
19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.941 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.217 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:23.217 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.151 
19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.151 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.716 00:18:24.716 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.716 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.716 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.974 { 00:18:24.974 "cntlid": 101, 00:18:24.974 "qid": 0, 00:18:24.974 "state": "enabled", 00:18:24.974 "thread": "nvmf_tgt_poll_group_000", 00:18:24.974 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:24.974 "listen_address": { 00:18:24.974 "trtype": "TCP", 00:18:24.974 "adrfam": "IPv4", 00:18:24.974 "traddr": "10.0.0.2", 00:18:24.974 "trsvcid": "4420" 00:18:24.974 }, 00:18:24.974 "peer_address": { 00:18:24.974 "trtype": "TCP", 00:18:24.974 "adrfam": "IPv4", 00:18:24.974 "traddr": "10.0.0.1", 00:18:24.974 "trsvcid": "41890" 00:18:24.974 }, 00:18:24.974 "auth": { 00:18:24.974 "state": "completed", 00:18:24.974 "digest": "sha512", 00:18:24.974 "dhgroup": "null" 00:18:24.974 } 00:18:24.974 } 00:18:24.974 ]' 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.974 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.232 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:25.232 19:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:26.166 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.166 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.166 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.166 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.166 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.166 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.166 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.166 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.424 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.991 00:18:26.991 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.991 
19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.991 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.249 { 00:18:27.249 "cntlid": 103, 00:18:27.249 "qid": 0, 00:18:27.249 "state": "enabled", 00:18:27.249 "thread": "nvmf_tgt_poll_group_000", 00:18:27.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:27.249 "listen_address": { 00:18:27.249 "trtype": "TCP", 00:18:27.249 "adrfam": "IPv4", 00:18:27.249 "traddr": "10.0.0.2", 00:18:27.249 "trsvcid": "4420" 00:18:27.249 }, 00:18:27.249 "peer_address": { 00:18:27.249 "trtype": "TCP", 00:18:27.249 "adrfam": "IPv4", 00:18:27.249 "traddr": "10.0.0.1", 00:18:27.249 "trsvcid": "45024" 00:18:27.249 }, 00:18:27.249 "auth": { 00:18:27.249 "state": "completed", 00:18:27.249 "digest": "sha512", 00:18:27.249 "dhgroup": "null" 00:18:27.249 } 00:18:27.249 } 00:18:27.249 ]' 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.249 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.531 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:27.531 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:28.466 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.466 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.466 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.466 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.466 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.466 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.466 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.466 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.466 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.724 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.981 00:18:28.981 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.982 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.982 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.240 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.240 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.240 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:29.240 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.498 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.498 { 00:18:29.498 "cntlid": 105, 00:18:29.498 "qid": 0, 00:18:29.498 "state": "enabled", 00:18:29.498 "thread": "nvmf_tgt_poll_group_000", 00:18:29.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:29.498 "listen_address": { 00:18:29.498 "trtype": "TCP", 00:18:29.498 "adrfam": "IPv4", 00:18:29.498 "traddr": "10.0.0.2", 00:18:29.498 "trsvcid": "4420" 00:18:29.498 }, 00:18:29.498 "peer_address": { 00:18:29.498 "trtype": "TCP", 00:18:29.498 "adrfam": "IPv4", 00:18:29.498 "traddr": "10.0.0.1", 00:18:29.498 "trsvcid": "45042" 00:18:29.498 }, 00:18:29.498 "auth": { 00:18:29.498 "state": "completed", 00:18:29.498 "digest": "sha512", 00:18:29.498 "dhgroup": "ffdhe2048" 00:18:29.498 } 00:18:29.498 } 00:18:29.498 ]' 00:18:29.498 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.498 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.498 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.498 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.498 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.498 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.498 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.498 19:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.756 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:29.756 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:30.690 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.690 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.690 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.690 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.690 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.690 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.690 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.690 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.949 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.514 00:18:31.514 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.514 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.514 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.772 { 00:18:31.772 "cntlid": 107, 00:18:31.772 "qid": 0, 00:18:31.772 "state": "enabled", 00:18:31.772 "thread": "nvmf_tgt_poll_group_000", 00:18:31.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:31.772 
"listen_address": { 00:18:31.772 "trtype": "TCP", 00:18:31.772 "adrfam": "IPv4", 00:18:31.772 "traddr": "10.0.0.2", 00:18:31.772 "trsvcid": "4420" 00:18:31.772 }, 00:18:31.772 "peer_address": { 00:18:31.772 "trtype": "TCP", 00:18:31.772 "adrfam": "IPv4", 00:18:31.772 "traddr": "10.0.0.1", 00:18:31.772 "trsvcid": "45072" 00:18:31.772 }, 00:18:31.772 "auth": { 00:18:31.772 "state": "completed", 00:18:31.772 "digest": "sha512", 00:18:31.772 "dhgroup": "ffdhe2048" 00:18:31.772 } 00:18:31.772 } 00:18:31.772 ]' 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.772 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.030 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:32.030 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:32.963 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.963 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.963 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.963 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.963 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.963 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.963 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.963 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.221 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.479 00:18:33.737 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:33.737 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.737 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.994 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.994 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.994 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.994 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.994 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.994 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.994 { 00:18:33.994 "cntlid": 109, 00:18:33.994 "qid": 0, 00:18:33.994 "state": "enabled", 00:18:33.994 "thread": "nvmf_tgt_poll_group_000", 00:18:33.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:33.994 "listen_address": { 00:18:33.994 "trtype": "TCP", 00:18:33.994 "adrfam": "IPv4", 00:18:33.995 "traddr": "10.0.0.2", 00:18:33.995 "trsvcid": "4420" 00:18:33.995 }, 00:18:33.995 "peer_address": { 00:18:33.995 "trtype": "TCP", 00:18:33.995 "adrfam": "IPv4", 00:18:33.995 "traddr": "10.0.0.1", 00:18:33.995 "trsvcid": "45092" 00:18:33.995 }, 00:18:33.995 "auth": { 00:18:33.995 "state": "completed", 00:18:33.995 "digest": "sha512", 00:18:33.995 "dhgroup": "ffdhe2048" 00:18:33.995 } 00:18:33.995 } 00:18:33.995 ]' 00:18:33.995 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.995 19:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.995 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.995 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.995 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.995 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.995 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.995 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.252 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:34.252 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:35.185 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.185 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.185 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.185 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.185 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.185 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.185 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.185 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:35.444 19:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.010 00:18:36.010 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.010 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.010 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.268 19:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.268 { 00:18:36.268 "cntlid": 111, 00:18:36.268 "qid": 0, 00:18:36.268 "state": "enabled", 00:18:36.268 "thread": "nvmf_tgt_poll_group_000", 00:18:36.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:36.268 "listen_address": { 00:18:36.268 "trtype": "TCP", 00:18:36.268 "adrfam": "IPv4", 00:18:36.268 "traddr": "10.0.0.2", 00:18:36.268 "trsvcid": "4420" 00:18:36.268 }, 00:18:36.268 "peer_address": { 00:18:36.268 "trtype": "TCP", 00:18:36.268 "adrfam": "IPv4", 00:18:36.268 "traddr": "10.0.0.1", 00:18:36.268 "trsvcid": "35664" 00:18:36.268 }, 00:18:36.268 "auth": { 00:18:36.268 "state": "completed", 00:18:36.268 "digest": "sha512", 00:18:36.268 "dhgroup": "ffdhe2048" 00:18:36.268 } 00:18:36.268 } 00:18:36.268 ]' 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.268 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.268 19:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.526 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:36.526 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:37.460 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.460 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.460 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.460 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.460 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.460 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.460 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.460 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:18:37.460 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.719 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.977 00:18:38.235 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.235 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.235 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.493 { 00:18:38.493 "cntlid": 113, 00:18:38.493 "qid": 0, 00:18:38.493 "state": "enabled", 00:18:38.493 "thread": "nvmf_tgt_poll_group_000", 00:18:38.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:38.493 "listen_address": { 
00:18:38.493 "trtype": "TCP", 00:18:38.493 "adrfam": "IPv4", 00:18:38.493 "traddr": "10.0.0.2", 00:18:38.493 "trsvcid": "4420" 00:18:38.493 }, 00:18:38.493 "peer_address": { 00:18:38.493 "trtype": "TCP", 00:18:38.493 "adrfam": "IPv4", 00:18:38.493 "traddr": "10.0.0.1", 00:18:38.493 "trsvcid": "35706" 00:18:38.493 }, 00:18:38.493 "auth": { 00:18:38.493 "state": "completed", 00:18:38.493 "digest": "sha512", 00:18:38.493 "dhgroup": "ffdhe3072" 00:18:38.493 } 00:18:38.493 } 00:18:38.493 ]' 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.493 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.751 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:38.751 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:39.684 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.684 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.684 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.684 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.684 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.684 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.684 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.685 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.942 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.508 00:18:40.508 19:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.508 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.509 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.767 { 00:18:40.767 "cntlid": 115, 00:18:40.767 "qid": 0, 00:18:40.767 "state": "enabled", 00:18:40.767 "thread": "nvmf_tgt_poll_group_000", 00:18:40.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:40.767 "listen_address": { 00:18:40.767 "trtype": "TCP", 00:18:40.767 "adrfam": "IPv4", 00:18:40.767 "traddr": "10.0.0.2", 00:18:40.767 "trsvcid": "4420" 00:18:40.767 }, 00:18:40.767 "peer_address": { 00:18:40.767 "trtype": "TCP", 00:18:40.767 "adrfam": "IPv4", 00:18:40.767 "traddr": "10.0.0.1", 00:18:40.767 "trsvcid": "35730" 00:18:40.767 }, 00:18:40.767 "auth": { 00:18:40.767 "state": "completed", 00:18:40.767 "digest": "sha512", 00:18:40.767 "dhgroup": "ffdhe3072" 00:18:40.767 } 00:18:40.767 } 00:18:40.767 ]' 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.767 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.024 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:41.024 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:41.956 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.956 19:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.956 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.956 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.956 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.956 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.956 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.956 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.213 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.778 00:18:42.778 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.778 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.778 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.035 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.035 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.035 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.035 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.036 { 00:18:43.036 "cntlid": 117, 00:18:43.036 "qid": 0, 00:18:43.036 "state": "enabled", 00:18:43.036 "thread": "nvmf_tgt_poll_group_000", 00:18:43.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:43.036 "listen_address": { 00:18:43.036 "trtype": "TCP", 00:18:43.036 "adrfam": "IPv4", 00:18:43.036 "traddr": "10.0.0.2", 00:18:43.036 "trsvcid": "4420" 00:18:43.036 }, 00:18:43.036 "peer_address": { 00:18:43.036 "trtype": "TCP", 00:18:43.036 "adrfam": "IPv4", 00:18:43.036 "traddr": "10.0.0.1", 00:18:43.036 "trsvcid": "35756" 00:18:43.036 }, 00:18:43.036 "auth": { 00:18:43.036 "state": "completed", 00:18:43.036 "digest": "sha512", 00:18:43.036 "dhgroup": "ffdhe3072" 00:18:43.036 } 00:18:43.036 } 00:18:43.036 ]' 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.036 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.601 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:43.601 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:44.532 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.532 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.532 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.532 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.532 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.532 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:44.532 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.532 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.532 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:44.533 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.533 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.095 00:18:45.095 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.095 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.095 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.355 { 00:18:45.355 "cntlid": 119, 00:18:45.355 "qid": 0, 00:18:45.355 "state": "enabled", 00:18:45.355 "thread": "nvmf_tgt_poll_group_000", 00:18:45.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:45.355 "listen_address": { 00:18:45.355 
"trtype": "TCP", 00:18:45.355 "adrfam": "IPv4", 00:18:45.355 "traddr": "10.0.0.2", 00:18:45.355 "trsvcid": "4420" 00:18:45.355 }, 00:18:45.355 "peer_address": { 00:18:45.355 "trtype": "TCP", 00:18:45.355 "adrfam": "IPv4", 00:18:45.355 "traddr": "10.0.0.1", 00:18:45.355 "trsvcid": "40558" 00:18:45.355 }, 00:18:45.355 "auth": { 00:18:45.355 "state": "completed", 00:18:45.355 "digest": "sha512", 00:18:45.355 "dhgroup": "ffdhe3072" 00:18:45.355 } 00:18:45.355 } 00:18:45.355 ]' 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.355 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.613 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:45.613 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:46.547 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.547 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.547 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.547 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.547 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.547 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.547 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.547 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.547 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.805 19:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.805 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.371 00:18:47.371 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.371 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.371 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.629 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.629 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.629 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.629 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.629 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.629 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.629 { 00:18:47.629 "cntlid": 121, 00:18:47.629 "qid": 0, 00:18:47.629 "state": "enabled", 00:18:47.629 "thread": "nvmf_tgt_poll_group_000", 00:18:47.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:47.629 "listen_address": { 00:18:47.629 "trtype": "TCP", 00:18:47.629 "adrfam": "IPv4", 00:18:47.629 "traddr": "10.0.0.2", 00:18:47.629 "trsvcid": "4420" 00:18:47.629 }, 00:18:47.629 "peer_address": { 00:18:47.629 "trtype": "TCP", 00:18:47.629 "adrfam": "IPv4", 00:18:47.629 "traddr": "10.0.0.1", 00:18:47.629 "trsvcid": "40580" 00:18:47.629 }, 00:18:47.629 "auth": { 00:18:47.629 "state": "completed", 00:18:47.629 "digest": "sha512", 00:18:47.629 "dhgroup": "ffdhe4096" 00:18:47.629 } 00:18:47.629 } 00:18:47.629 ]' 00:18:47.629 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.630 19:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.630 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.630 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.630 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.630 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.630 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.630 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.910 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:47.910 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:48.941 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:48.941 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.941 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.941 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.941 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.941 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.941 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.941 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.198 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.456 00:18:49.456 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.456 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.456 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.714 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.714 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.714 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.714 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.972 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.972 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.972 { 00:18:49.972 "cntlid": 123, 00:18:49.972 "qid": 0, 00:18:49.973 "state": "enabled", 00:18:49.973 "thread": "nvmf_tgt_poll_group_000", 00:18:49.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:49.973 "listen_address": { 00:18:49.973 "trtype": "TCP", 00:18:49.973 "adrfam": "IPv4", 00:18:49.973 "traddr": "10.0.0.2", 00:18:49.973 "trsvcid": "4420" 00:18:49.973 }, 00:18:49.973 "peer_address": { 00:18:49.973 "trtype": "TCP", 00:18:49.973 "adrfam": "IPv4", 00:18:49.973 "traddr": "10.0.0.1", 00:18:49.973 "trsvcid": "40604" 00:18:49.973 }, 00:18:49.973 "auth": { 00:18:49.973 "state": "completed", 00:18:49.973 "digest": "sha512", 00:18:49.973 "dhgroup": "ffdhe4096" 00:18:49.973 } 00:18:49.973 } 00:18:49.973 ]' 00:18:49.973 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.973 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.973 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.973 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.973 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.973 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:49.973 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.973 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.231 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:50.231 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:51.165 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.165 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.165 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.165 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.165 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.165 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:51.165 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:51.165 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.424 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.682 00:18:51.940 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.940 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.940 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.198 { 00:18:52.198 "cntlid": 125, 00:18:52.198 "qid": 0, 00:18:52.198 "state": "enabled", 00:18:52.198 "thread": "nvmf_tgt_poll_group_000", 00:18:52.198 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:52.198 "listen_address": { 00:18:52.198 "trtype": "TCP", 00:18:52.198 "adrfam": "IPv4", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "trsvcid": "4420" 00:18:52.198 }, 00:18:52.198 "peer_address": { 00:18:52.198 "trtype": "TCP", 00:18:52.198 "adrfam": "IPv4", 00:18:52.198 "traddr": "10.0.0.1", 00:18:52.198 "trsvcid": "40622" 00:18:52.198 }, 00:18:52.198 "auth": { 00:18:52.198 "state": "completed", 00:18:52.198 "digest": "sha512", 00:18:52.198 "dhgroup": "ffdhe4096" 00:18:52.198 } 00:18:52.198 } 00:18:52.198 ]' 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.198 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.199 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.199 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.199 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.457 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:52.457 19:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:18:53.389 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.389 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.389 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.389 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.389 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.389 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.389 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:53.389 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.646 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.211 00:18:54.211 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:54.211 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.211 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.470 { 00:18:54.470 "cntlid": 127, 00:18:54.470 "qid": 0, 00:18:54.470 "state": "enabled", 00:18:54.470 "thread": "nvmf_tgt_poll_group_000", 00:18:54.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:54.470 "listen_address": { 00:18:54.470 "trtype": "TCP", 00:18:54.470 "adrfam": "IPv4", 00:18:54.470 "traddr": "10.0.0.2", 00:18:54.470 "trsvcid": "4420" 00:18:54.470 }, 00:18:54.470 "peer_address": { 00:18:54.470 "trtype": "TCP", 00:18:54.470 "adrfam": "IPv4", 00:18:54.470 "traddr": "10.0.0.1", 00:18:54.470 "trsvcid": "40656" 00:18:54.470 }, 00:18:54.470 "auth": { 00:18:54.470 "state": "completed", 00:18:54.470 "digest": "sha512", 00:18:54.470 "dhgroup": "ffdhe4096" 00:18:54.470 } 00:18:54.470 } 00:18:54.470 ]' 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.470 19:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.470 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.727 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:54.728 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:18:55.658 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.658 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.659 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.659 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.659 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.659 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.659 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.659 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.659 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.917 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.498 00:18:56.498 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.498 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.498 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.756 { 00:18:56.756 "cntlid": 129, 00:18:56.756 "qid": 0, 00:18:56.756 "state": "enabled", 00:18:56.756 "thread": "nvmf_tgt_poll_group_000", 00:18:56.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:56.756 "listen_address": { 00:18:56.756 "trtype": "TCP", 00:18:56.756 "adrfam": "IPv4", 00:18:56.756 "traddr": "10.0.0.2", 00:18:56.756 "trsvcid": "4420" 00:18:56.756 }, 00:18:56.756 "peer_address": { 00:18:56.756 "trtype": "TCP", 00:18:56.756 "adrfam": "IPv4", 00:18:56.756 "traddr": "10.0.0.1", 00:18:56.756 "trsvcid": "45522" 00:18:56.756 }, 00:18:56.756 "auth": { 00:18:56.756 "state": "completed", 00:18:56.756 "digest": "sha512", 00:18:56.756 "dhgroup": "ffdhe6144" 00:18:56.756 } 00:18:56.756 } 00:18:56.756 ]' 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.756 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.323 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:57.323 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:18:57.888 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.888 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.888 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.888 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.148 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.148 19:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.148 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.148 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.406 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.974 00:18:58.974 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.974 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.974 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.231 { 00:18:59.231 "cntlid": 131, 00:18:59.231 "qid": 0, 00:18:59.231 "state": 
"enabled", 00:18:59.231 "thread": "nvmf_tgt_poll_group_000", 00:18:59.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:59.231 "listen_address": { 00:18:59.231 "trtype": "TCP", 00:18:59.231 "adrfam": "IPv4", 00:18:59.231 "traddr": "10.0.0.2", 00:18:59.231 "trsvcid": "4420" 00:18:59.231 }, 00:18:59.231 "peer_address": { 00:18:59.231 "trtype": "TCP", 00:18:59.231 "adrfam": "IPv4", 00:18:59.231 "traddr": "10.0.0.1", 00:18:59.231 "trsvcid": "45544" 00:18:59.231 }, 00:18:59.231 "auth": { 00:18:59.231 "state": "completed", 00:18:59.231 "digest": "sha512", 00:18:59.231 "dhgroup": "ffdhe6144" 00:18:59.231 } 00:18:59.231 } 00:18:59.231 ]' 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.231 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.489 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret 
DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:18:59.489 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:19:00.425 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.425 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.425 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.425 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.425 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.425 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.425 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.425 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.684 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.253 00:19:01.253 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.253 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.253 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.512 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.512 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.512 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.512 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.512 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.512 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.512 { 00:19:01.512 "cntlid": 133, 00:19:01.512 "qid": 0, 00:19:01.512 "state": "enabled", 00:19:01.512 "thread": "nvmf_tgt_poll_group_000", 00:19:01.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:01.512 "listen_address": { 00:19:01.512 "trtype": "TCP", 00:19:01.512 "adrfam": "IPv4", 00:19:01.512 "traddr": "10.0.0.2", 00:19:01.512 "trsvcid": "4420" 00:19:01.512 }, 00:19:01.512 "peer_address": { 00:19:01.512 "trtype": "TCP", 00:19:01.512 "adrfam": "IPv4", 00:19:01.512 "traddr": "10.0.0.1", 00:19:01.512 "trsvcid": "45568" 00:19:01.512 }, 00:19:01.512 "auth": { 00:19:01.512 "state": "completed", 00:19:01.512 "digest": "sha512", 00:19:01.512 "dhgroup": "ffdhe6144" 00:19:01.512 } 
00:19:01.512 } 00:19:01.512 ]' 00:19:01.512 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.512 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.512 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.512 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.512 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.771 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.771 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.771 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.029 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:19:02.029 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:02.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.966 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.226 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.226 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:03.226 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.226 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.796 00:19:03.796 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.796 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.796 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.796 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.796 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:03.796 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.796 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.055 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.055 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.055 { 00:19:04.055 "cntlid": 135, 00:19:04.055 "qid": 0, 00:19:04.055 "state": "enabled", 00:19:04.055 "thread": "nvmf_tgt_poll_group_000", 00:19:04.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:04.055 "listen_address": { 00:19:04.055 "trtype": "TCP", 00:19:04.055 "adrfam": "IPv4", 00:19:04.055 "traddr": "10.0.0.2", 00:19:04.055 "trsvcid": "4420" 00:19:04.055 }, 00:19:04.055 "peer_address": { 00:19:04.055 "trtype": "TCP", 00:19:04.055 "adrfam": "IPv4", 00:19:04.055 "traddr": "10.0.0.1", 00:19:04.055 "trsvcid": "45598" 00:19:04.055 }, 00:19:04.055 "auth": { 00:19:04.055 "state": "completed", 00:19:04.055 "digest": "sha512", 00:19:04.055 "dhgroup": "ffdhe6144" 00:19:04.055 } 00:19:04.055 } 00:19:04.055 ]' 00:19:04.055 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.055 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.055 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.055 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.055 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.055 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.055 19:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.055 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.314 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:19:04.314 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:19:05.261 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.261 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.261 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.261 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.261 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.261 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.261 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.261 19:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.261 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.519 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.456 00:19:06.456 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.456 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.456 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.714 { 00:19:06.714 "cntlid": 137, 00:19:06.714 "qid": 0, 00:19:06.714 "state": "enabled", 00:19:06.714 "thread": "nvmf_tgt_poll_group_000", 00:19:06.714 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:06.714 "listen_address": { 00:19:06.714 "trtype": "TCP", 00:19:06.714 "adrfam": "IPv4", 00:19:06.714 "traddr": "10.0.0.2", 00:19:06.714 "trsvcid": "4420" 00:19:06.714 }, 00:19:06.714 "peer_address": { 00:19:06.714 "trtype": "TCP", 00:19:06.714 "adrfam": "IPv4", 00:19:06.714 "traddr": "10.0.0.1", 00:19:06.714 "trsvcid": "46960" 00:19:06.714 }, 00:19:06.714 "auth": { 00:19:06.714 "state": "completed", 00:19:06.714 "digest": "sha512", 00:19:06.714 "dhgroup": "ffdhe8192" 00:19:06.714 } 00:19:06.714 } 00:19:06.714 ]' 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.714 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.972 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.972 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.972 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.232 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret 
DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:19:07.232 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:08.169 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.169 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.429 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.429 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.429 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.429 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.999 00:19:09.259 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.259 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.259 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.518 { 00:19:09.518 "cntlid": 139, 00:19:09.518 "qid": 0, 00:19:09.518 "state": "enabled", 00:19:09.518 "thread": "nvmf_tgt_poll_group_000", 00:19:09.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:09.518 "listen_address": { 00:19:09.518 "trtype": "TCP", 00:19:09.518 "adrfam": "IPv4", 00:19:09.518 "traddr": "10.0.0.2", 00:19:09.518 "trsvcid": "4420" 00:19:09.518 }, 00:19:09.518 "peer_address": { 00:19:09.518 "trtype": "TCP", 00:19:09.518 "adrfam": "IPv4", 00:19:09.518 "traddr": "10.0.0.1", 00:19:09.518 "trsvcid": "47004" 00:19:09.518 }, 00:19:09.518 "auth": { 00:19:09.518 "state": 
"completed", 00:19:09.518 "digest": "sha512", 00:19:09.518 "dhgroup": "ffdhe8192" 00:19:09.518 } 00:19:09.518 } 00:19:09.518 ]' 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.518 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.519 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.777 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:19:09.777 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: --dhchap-ctrl-secret DHHC-1:02:ZjRiNzE1MzliNjFjMzRjNGFhNjNiYzYwYWI4ZGRjZDZhMTNiYjE5ODIxMDJlYzEwtgPk+g==: 00:19:10.716 19:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.716 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.716 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.716 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.716 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.716 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.716 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.716 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.975 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.909 00:19:11.909 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.909 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.909 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.168 
19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.168 { 00:19:12.168 "cntlid": 141, 00:19:12.168 "qid": 0, 00:19:12.168 "state": "enabled", 00:19:12.168 "thread": "nvmf_tgt_poll_group_000", 00:19:12.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:12.168 "listen_address": { 00:19:12.168 "trtype": "TCP", 00:19:12.168 "adrfam": "IPv4", 00:19:12.168 "traddr": "10.0.0.2", 00:19:12.168 "trsvcid": "4420" 00:19:12.168 }, 00:19:12.168 "peer_address": { 00:19:12.168 "trtype": "TCP", 00:19:12.168 "adrfam": "IPv4", 00:19:12.168 "traddr": "10.0.0.1", 00:19:12.168 "trsvcid": "47024" 00:19:12.168 }, 00:19:12.168 "auth": { 00:19:12.168 "state": "completed", 00:19:12.168 "digest": "sha512", 00:19:12.168 "dhgroup": "ffdhe8192" 00:19:12.168 } 00:19:12.168 } 00:19:12.168 ]' 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.168 19:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.168 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.169 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.169 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.751 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:19:12.751 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:01:MjhiMDliNTZkMmZhY2VmY2QyZTcxZDU4MTk2ZDI0MTLKRLB9: 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.807 
19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.807 19:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.807 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.745 00:19:14.745 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.745 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.745 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.003 { 00:19:15.003 "cntlid": 143, 
00:19:15.003 "qid": 0, 00:19:15.003 "state": "enabled", 00:19:15.003 "thread": "nvmf_tgt_poll_group_000", 00:19:15.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:15.003 "listen_address": { 00:19:15.003 "trtype": "TCP", 00:19:15.003 "adrfam": "IPv4", 00:19:15.003 "traddr": "10.0.0.2", 00:19:15.003 "trsvcid": "4420" 00:19:15.003 }, 00:19:15.003 "peer_address": { 00:19:15.003 "trtype": "TCP", 00:19:15.003 "adrfam": "IPv4", 00:19:15.003 "traddr": "10.0.0.1", 00:19:15.003 "trsvcid": "47060" 00:19:15.003 }, 00:19:15.003 "auth": { 00:19:15.003 "state": "completed", 00:19:15.003 "digest": "sha512", 00:19:15.003 "dhgroup": "ffdhe8192" 00:19:15.003 } 00:19:15.003 } 00:19:15.003 ]' 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:15.003 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.260 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.260 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.260 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.519 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:19:15.519 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:19:16.451 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.709 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.644 00:19:17.644 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.644 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.644 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.902 { 00:19:17.902 "cntlid": 145, 00:19:17.902 "qid": 0, 00:19:17.902 "state": "enabled", 00:19:17.902 "thread": "nvmf_tgt_poll_group_000", 00:19:17.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:17.902 "listen_address": { 
00:19:17.902 "trtype": "TCP", 00:19:17.902 "adrfam": "IPv4", 00:19:17.902 "traddr": "10.0.0.2", 00:19:17.902 "trsvcid": "4420" 00:19:17.902 }, 00:19:17.902 "peer_address": { 00:19:17.902 "trtype": "TCP", 00:19:17.902 "adrfam": "IPv4", 00:19:17.902 "traddr": "10.0.0.1", 00:19:17.902 "trsvcid": "36488" 00:19:17.902 }, 00:19:17.902 "auth": { 00:19:17.902 "state": "completed", 00:19:17.902 "digest": "sha512", 00:19:17.902 "dhgroup": "ffdhe8192" 00:19:17.902 } 00:19:17.902 } 00:19:17.902 ]' 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.902 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.466 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:19:18.467 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTJjYTg0NTBjZGNhZjMxN2RmM2I0NjI2MjM2ZTEwNjA4ZGNlYmFkOTVmN2Q2ZWRjuTW+LQ==: --dhchap-ctrl-secret DHHC-1:03:MWQwMmQ2MThjMGI2M2FkODFkMGFiY2UwNjk0NjJiNmUwZDQ4MDlmNDlhZDc4MzU0ZjVmZDhlNGRlZWI3OTUwZg5MuHg=: 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:19.399 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:19.970 request: 00:19:19.970 { 00:19:19.970 "name": "nvme0", 00:19:19.970 "trtype": "tcp", 00:19:19.970 "traddr": "10.0.0.2", 00:19:19.970 "adrfam": "ipv4", 00:19:19.970 "trsvcid": "4420", 00:19:19.970 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:19.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:19.970 "prchk_reftag": false, 00:19:19.970 "prchk_guard": false, 00:19:19.970 "hdgst": false, 00:19:19.970 "ddgst": 
false, 00:19:19.970 "dhchap_key": "key2", 00:19:19.970 "allow_unrecognized_csi": false, 00:19:19.970 "method": "bdev_nvme_attach_controller", 00:19:19.970 "req_id": 1 00:19:19.970 } 00:19:19.970 Got JSON-RPC error response 00:19:19.970 response: 00:19:19.970 { 00:19:19.970 "code": -5, 00:19:19.970 "message": "Input/output error" 00:19:19.970 } 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
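The `--dhchap-secret` strings exercised above (e.g. `DHHC-1:02:...`) follow the DH-HMAC-CHAP secret representation. A minimal sketch of pulling one apart in shell, under the assumption (per nvme-cli's `gen-dhchap-key` output) that the format is `DHHC-1:<hash-id>:<base64>:` where the hash id selects the key transformation (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the key bytes followed by a 4-byte CRC:

```shell
# Secret copied from the log above; hash id 02 suggests a SHA-384 (48-byte) key.
secret="DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==:"

# Split the three colon-delimited fields with POSIX parameter expansion.
hash=${secret#DHHC-1:}; hash=${hash%%:*}   # key transformation hash id
b64=${secret#DHHC-1:*:}; b64=${b64%:}      # base64 payload, trailing ':' stripped

# Decode and measure: payload length minus the assumed 4-byte CRC gives the key size.
blob_len=$(printf '%s' "$b64" | base64 -d | wc -c)
key_len=$((blob_len - 4))
echo "hash_id=$hash key_bytes=$key_len"
```

With this secret the payload decodes to 52 bytes, i.e. a 48-byte key plus the 4-byte checksum, consistent with the SHA-384 hash id.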
00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.970 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:20.908 request: 00:19:20.909 { 00:19:20.909 "name": "nvme0", 00:19:20.909 "trtype": "tcp", 00:19:20.909 "traddr": "10.0.0.2", 
00:19:20.909 "adrfam": "ipv4", 00:19:20.909 "trsvcid": "4420", 00:19:20.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:20.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:20.909 "prchk_reftag": false, 00:19:20.909 "prchk_guard": false, 00:19:20.909 "hdgst": false, 00:19:20.909 "ddgst": false, 00:19:20.909 "dhchap_key": "key1", 00:19:20.909 "dhchap_ctrlr_key": "ckey2", 00:19:20.909 "allow_unrecognized_csi": false, 00:19:20.909 "method": "bdev_nvme_attach_controller", 00:19:20.909 "req_id": 1 00:19:20.909 } 00:19:20.909 Got JSON-RPC error response 00:19:20.909 response: 00:19:20.909 { 00:19:20.909 "code": -5, 00:19:20.909 "message": "Input/output error" 00:19:20.909 } 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.909 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.849 request: 00:19:21.849 { 00:19:21.849 "name": "nvme0", 00:19:21.849 "trtype": "tcp", 00:19:21.849 "traddr": "10.0.0.2", 00:19:21.849 "adrfam": "ipv4", 00:19:21.849 "trsvcid": "4420", 00:19:21.849 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:21.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.849 "prchk_reftag": false, 00:19:21.849 "prchk_guard": false, 00:19:21.849 "hdgst": false, 00:19:21.849 "ddgst": false, 00:19:21.849 "dhchap_key": "key1", 00:19:21.849 "dhchap_ctrlr_key": "ckey1", 00:19:21.849 "allow_unrecognized_csi": false, 00:19:21.849 "method": "bdev_nvme_attach_controller", 00:19:21.849 "req_id": 1 00:19:21.849 } 00:19:21.849 Got JSON-RPC error response 00:19:21.849 response: 00:19:21.849 { 00:19:21.849 "code": -5, 00:19:21.849 "message": "Input/output error" 00:19:21.849 } 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.849 
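The negative tests above rely on the `NOT` wrapper from `autotest_common.sh`, which passes only when the wrapped command fails (here: attach attempts with keys the subsystem never registered, which surface as JSON-RPC `-5` Input/output errors). A simplified sketch of that inversion pattern, assuming the essential behavior rather than reproducing the real helper:

```shell
# Illustrative stand-in for the NOT helper: succeed iff the command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    else
        return 0    # command failed, which is what the test expects
    fi
}

NOT false && echo "inverted failure: ok"
```

In the log, `NOT bdev_connect ... --dhchap-key key2` passing means the target correctly rejected authentication with an unregistered key.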
19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1104818 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1104818 ']' 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1104818 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1104818 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1104818' 00:19:21.849 killing process with pid 1104818 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1104818 00:19:21.849 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1104818 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
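Between test phases the harness tears down the nvmf target with `killprocess` (liveness check via `kill -0`, signal, then reap) before restarting it with `nvmfappstart`. A hedged sketch of that shutdown pattern; the function name matches the log but the body is illustrative, not the actual `autotest_common.sh` implementation:

```shell
# Minimal killprocess-style teardown: verify the pid is alive, terminate, reap.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    kill "$pid"                              # send SIGTERM
    wait "$pid" 2>/dev/null || true          # reap; ignore the signal exit code
    echo "killing process with pid $pid"
}

sleep 30 &
killprocess $!
```

Reaping with `wait` matters here: the next `nvmfappstart` must not race a dying reactor that still holds the listen address.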
00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1127499 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1127499 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1127499 ']' 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.107 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1127499 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1127499 ']' 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.366 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.625 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.625 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.626 null0 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.syz 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.vTc ]] 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vTc 00:19:22.626 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.626 19:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.FJY 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.k13 ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k13 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.VmU 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.885 19:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.V11 ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.V11 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cUJ 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.885 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.266 nvme0n1 00:19:24.266 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.266 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.266 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.524 { 00:19:24.524 "cntlid": 1, 00:19:24.524 "qid": 0, 00:19:24.524 "state": "enabled", 00:19:24.524 "thread": "nvmf_tgt_poll_group_000", 00:19:24.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:24.524 "listen_address": { 00:19:24.524 "trtype": "TCP", 00:19:24.524 "adrfam": "IPv4", 00:19:24.524 "traddr": "10.0.0.2", 00:19:24.524 "trsvcid": "4420" 00:19:24.524 }, 00:19:24.524 "peer_address": { 00:19:24.524 "trtype": "TCP", 00:19:24.524 "adrfam": "IPv4", 00:19:24.524 "traddr": "10.0.0.1", 00:19:24.524 "trsvcid": "36552" 00:19:24.524 }, 00:19:24.524 "auth": { 00:19:24.524 "state": "completed", 00:19:24.524 "digest": "sha512", 00:19:24.524 "dhgroup": "ffdhe8192" 00:19:24.524 } 00:19:24.524 } 00:19:24.524 ]' 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.524 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.782 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:19:24.782 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:25.718 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.977 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.543 request: 00:19:26.543 { 00:19:26.543 "name": "nvme0", 00:19:26.543 "trtype": "tcp", 00:19:26.543 "traddr": "10.0.0.2", 00:19:26.543 "adrfam": "ipv4", 00:19:26.543 "trsvcid": "4420", 00:19:26.543 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:26.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:26.543 "prchk_reftag": false, 00:19:26.543 "prchk_guard": false, 00:19:26.543 "hdgst": false, 00:19:26.543 "ddgst": false, 00:19:26.543 "dhchap_key": "key3", 00:19:26.544 "allow_unrecognized_csi": false, 00:19:26.544 "method": "bdev_nvme_attach_controller", 00:19:26.544 "req_id": 1 00:19:26.544 } 00:19:26.544 Got JSON-RPC error response 00:19:26.544 response: 00:19:26.544 { 00:19:26.544 "code": -5, 00:19:26.544 "message": "Input/output error" 00:19:26.544 } 00:19:26.544 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:26.544 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.544 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.544 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.544 19:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:26.544 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:26.544 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:26.544 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:26.802 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.060 request: 00:19:27.060 { 00:19:27.060 "name": "nvme0", 00:19:27.060 "trtype": "tcp", 00:19:27.060 "traddr": "10.0.0.2", 00:19:27.060 "adrfam": "ipv4", 00:19:27.060 "trsvcid": "4420", 00:19:27.060 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:27.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:27.060 "prchk_reftag": false, 00:19:27.060 "prchk_guard": false, 00:19:27.060 "hdgst": false, 00:19:27.060 "ddgst": false, 00:19:27.060 "dhchap_key": "key3", 00:19:27.060 "allow_unrecognized_csi": false, 00:19:27.060 "method": "bdev_nvme_attach_controller", 00:19:27.060 "req_id": 1 00:19:27.060 } 00:19:27.060 Got JSON-RPC error response 00:19:27.060 response: 00:19:27.060 { 00:19:27.060 "code": -5, 00:19:27.060 "message": "Input/output error" 00:19:27.060 } 00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:27.060 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.319 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.320 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.320 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.888 request: 00:19:27.888 { 00:19:27.888 "name": "nvme0", 00:19:27.888 "trtype": "tcp", 00:19:27.888 "traddr": "10.0.0.2", 00:19:27.888 "adrfam": "ipv4", 00:19:27.888 "trsvcid": "4420", 00:19:27.888 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:27.888 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:27.888 "prchk_reftag": false, 00:19:27.888 "prchk_guard": false, 00:19:27.888 "hdgst": false, 00:19:27.888 "ddgst": false, 00:19:27.888 "dhchap_key": "key0", 00:19:27.888 "dhchap_ctrlr_key": "key1", 00:19:27.888 "allow_unrecognized_csi": false, 00:19:27.888 "method": "bdev_nvme_attach_controller", 00:19:27.888 "req_id": 1 00:19:27.888 } 00:19:27.888 Got JSON-RPC error response 00:19:27.888 response: 00:19:27.888 { 00:19:27.888 "code": -5, 00:19:27.888 "message": "Input/output error" 00:19:27.888 } 00:19:27.888 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:27.888 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.888 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.888 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.888 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:27.888 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:27.888 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:28.147 nvme0n1 00:19:28.147 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:19:28.147 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:28.147 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.406 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.406 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.406 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.665 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:19:28.665 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.665 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.665 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.665 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:28.665 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:28.665 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:30.045 nvme0n1 00:19:30.045 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:30.045 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.045 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:30.313 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.313 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:30.313 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.313 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.313 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.313 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:30.313 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:30.313 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.572 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.572 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:19:30.572 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: --dhchap-ctrl-secret DHHC-1:03:MWZhODBjMzI3N2U4MzMzYjMxYTNlMmI0NjEyYzRmMTc5ZDhkZjI1OTY1ZmY0NjA4NTBkMzU0ZTQyNDc5ZWVkZlgODXs=: 00:19:31.507 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:31.508 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:31.508 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:31.508 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:31.508 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:31.508 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:31.508 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:31.508 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.508 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.765 19:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:31.766 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:31.766 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:31.766 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:31.766 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.766 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:31.766 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.766 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:31.766 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:31.766 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:32.706 request: 00:19:32.706 { 00:19:32.706 "name": "nvme0", 00:19:32.706 "trtype": "tcp", 00:19:32.706 "traddr": "10.0.0.2", 00:19:32.706 "adrfam": "ipv4", 00:19:32.706 "trsvcid": "4420", 00:19:32.706 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:32.706 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:32.706 "prchk_reftag": false, 00:19:32.706 "prchk_guard": false, 00:19:32.706 "hdgst": false, 00:19:32.706 "ddgst": false, 00:19:32.706 "dhchap_key": "key1", 00:19:32.706 "allow_unrecognized_csi": false, 00:19:32.706 "method": "bdev_nvme_attach_controller", 00:19:32.706 "req_id": 1 00:19:32.706 } 00:19:32.706 Got JSON-RPC error response 00:19:32.706 response: 00:19:32.706 { 00:19:32.706 "code": -5, 00:19:32.706 "message": "Input/output error" 00:19:32.706 } 00:19:32.706 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:32.706 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.706 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.706 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.706 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:32.706 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:32.706 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:34.085 nvme0n1 00:19:34.085 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:19:34.085 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:34.085 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.343 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.343 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.343 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.602 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.602 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.602 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.602 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.602 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:34.602 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:34.602 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:34.859 nvme0n1 00:19:34.859 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:34.859 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.859 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:35.116 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.116 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.116 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: '' 2s 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:35.375 19:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: ]] 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTZlODM1NWYxNWJjZWM2ODVhY2VlNWRlYmZjODJhMzUlFSSB: 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:35.375 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: 2s 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: ]] 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:NmI4NjQwMDVkNTlmZDQ0ODA0MWEyYzMzOGMxNDM3ZmJhYTZjNTlhYzVmN2M2NDIzDkfgAg==: 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:37.905 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:39.815 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:39.815 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:39.815 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:39.815 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:39.815 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:39.815 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:39.815 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:39.815 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.815 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.815 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.815 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.815 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.815 19:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:39.815 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:39.815 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:41.213 nvme0n1 00:19:41.213 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.213 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.213 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.213 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.213 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.213 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.780 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:41.780 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:41.780 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.038 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.038 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.038 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.038 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.038 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.038 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:42.038 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:42.297 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:42.297 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:42.297 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.556 19:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:42.556 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:43.507 request: 00:19:43.507 { 00:19:43.507 "name": "nvme0", 00:19:43.507 "dhchap_key": "key1", 00:19:43.507 "dhchap_ctrlr_key": "key3", 00:19:43.507 "method": "bdev_nvme_set_keys", 00:19:43.507 "req_id": 1 00:19:43.507 } 00:19:43.507 Got JSON-RPC error response 00:19:43.507 response: 00:19:43.507 { 00:19:43.507 "code": -13, 00:19:43.507 "message": "Permission denied" 00:19:43.507 } 00:19:43.507 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:43.507 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:43.507 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:43.507 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:43.507 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:43.507 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:43.507 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.766 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:43.766 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:44.704 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:44.704 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:44.704 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.961 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:44.961 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:44.961 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.961 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.961 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.961 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:44.961 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:44.961 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:46.338 nvme0n1 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:46.338 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:47.277 request: 00:19:47.277 { 00:19:47.277 "name": "nvme0", 00:19:47.277 "dhchap_key": "key2", 
00:19:47.277 "dhchap_ctrlr_key": "key0", 00:19:47.277 "method": "bdev_nvme_set_keys", 00:19:47.277 "req_id": 1 00:19:47.277 } 00:19:47.277 Got JSON-RPC error response 00:19:47.277 response: 00:19:47.277 { 00:19:47.277 "code": -13, 00:19:47.277 "message": "Permission denied" 00:19:47.277 } 00:19:47.277 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:47.277 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:47.277 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:47.277 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:47.277 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:47.277 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.277 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:47.536 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:47.536 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:48.469 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:48.470 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:48.470 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:48.728 19:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1104841 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1104841 ']' 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1104841 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1104841 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1104841' 00:19:48.728 killing process with pid 1104841 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1104841 00:19:48.728 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1104841 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.297 rmmod nvme_tcp 00:19:49.297 rmmod nvme_fabrics 00:19:49.297 rmmod nvme_keyring 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1127499 ']' 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1127499 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1127499 ']' 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1127499 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127499 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 1127499' 00:19:49.297 killing process with pid 1127499 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1127499 00:19:49.297 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1127499 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.557 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.103 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:52.103 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.syz /tmp/spdk.key-sha256.FJY 
/tmp/spdk.key-sha384.VmU /tmp/spdk.key-sha512.cUJ /tmp/spdk.key-sha512.vTc /tmp/spdk.key-sha384.k13 /tmp/spdk.key-sha256.V11 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:52.103 00:19:52.103 real 3m30.206s 00:19:52.103 user 8m12.674s 00:19:52.103 sys 0m27.848s 00:19:52.103 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.103 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.104 ************************************ 00:19:52.104 END TEST nvmf_auth_target 00:19:52.104 ************************************ 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.104 ************************************ 00:19:52.104 START TEST nvmf_bdevio_no_huge 00:19:52.104 ************************************ 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:52.104 * Looking for test storage... 
00:19:52.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:52.104 19:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.104 19:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:52.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.104 --rc genhtml_branch_coverage=1 00:19:52.104 --rc genhtml_function_coverage=1 00:19:52.104 --rc genhtml_legend=1 00:19:52.104 --rc geninfo_all_blocks=1 00:19:52.104 --rc geninfo_unexecuted_blocks=1 00:19:52.104 00:19:52.104 ' 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:52.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.104 --rc genhtml_branch_coverage=1 00:19:52.104 --rc genhtml_function_coverage=1 00:19:52.104 --rc genhtml_legend=1 00:19:52.104 --rc geninfo_all_blocks=1 00:19:52.104 --rc geninfo_unexecuted_blocks=1 00:19:52.104 00:19:52.104 ' 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:52.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.104 --rc genhtml_branch_coverage=1 00:19:52.104 --rc genhtml_function_coverage=1 00:19:52.104 --rc genhtml_legend=1 00:19:52.104 --rc geninfo_all_blocks=1 00:19:52.104 --rc geninfo_unexecuted_blocks=1 00:19:52.104 00:19:52.104 ' 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:52.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.104 --rc genhtml_branch_coverage=1 00:19:52.104 --rc genhtml_function_coverage=1 00:19:52.104 --rc genhtml_legend=1 00:19:52.104 --rc geninfo_all_blocks=1 00:19:52.104 --rc geninfo_unexecuted_blocks=1 00:19:52.104 00:19:52.104 ' 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:52.104 
19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.104 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:52.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:52.105 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:19:54.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:54.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:54.010 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:54.011 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.011 
19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:54.011 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:54.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:19:54.011 00:19:54.011 --- 10.0.0.2 ping statistics --- 00:19:54.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.011 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:54.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:54.011 00:19:54.011 --- 10.0.0.1 ping statistics --- 00:19:54.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.011 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1132863 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1132863 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1132863 ']' 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.011 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.269 [2024-12-06 19:18:04.602145] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:19:54.269 [2024-12-06 19:18:04.602233] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:54.269 [2024-12-06 19:18:04.685070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.269 [2024-12-06 19:18:04.745849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.269 [2024-12-06 19:18:04.745911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.269 [2024-12-06 19:18:04.745939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.269 [2024-12-06 19:18:04.745951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.269 [2024-12-06 19:18:04.745961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:54.269 [2024-12-06 19:18:04.746980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:54.269 [2024-12-06 19:18:04.747049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:54.269 [2024-12-06 19:18:04.747121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:54.269 [2024-12-06 19:18:04.747118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 [2024-12-06 19:18:04.894629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:54.527 19:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 Malloc0 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 [2024-12-06 19:18:04.932518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.527 19:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:54.527 { 00:19:54.527 "params": { 00:19:54.527 "name": "Nvme$subsystem", 00:19:54.527 "trtype": "$TEST_TRANSPORT", 00:19:54.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.527 "adrfam": "ipv4", 00:19:54.527 "trsvcid": "$NVMF_PORT", 00:19:54.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.527 "hdgst": ${hdgst:-false}, 00:19:54.527 "ddgst": ${ddgst:-false} 00:19:54.527 }, 00:19:54.527 "method": "bdev_nvme_attach_controller" 00:19:54.527 } 00:19:54.527 EOF 00:19:54.527 )") 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:54.527 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:54.527 "params": { 00:19:54.527 "name": "Nvme1", 00:19:54.527 "trtype": "tcp", 00:19:54.527 "traddr": "10.0.0.2", 00:19:54.527 "adrfam": "ipv4", 00:19:54.527 "trsvcid": "4420", 00:19:54.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.527 "hdgst": false, 00:19:54.527 "ddgst": false 00:19:54.527 }, 00:19:54.527 "method": "bdev_nvme_attach_controller" 00:19:54.527 }' 00:19:54.527 [2024-12-06 19:18:04.980486] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:19:54.527 [2024-12-06 19:18:04.980575] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1132897 ] 00:19:54.527 [2024-12-06 19:18:05.058838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:54.786 [2024-12-06 19:18:05.124698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.786 [2024-12-06 19:18:05.124749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.786 [2024-12-06 19:18:05.124753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.044 I/O targets: 00:19:55.044 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:55.044 00:19:55.044 00:19:55.044 CUnit - A unit testing framework for C - Version 2.1-3 00:19:55.044 http://cunit.sourceforge.net/ 00:19:55.044 00:19:55.044 00:19:55.044 Suite: bdevio tests on: Nvme1n1 00:19:55.044 Test: blockdev write read block ...passed 00:19:55.044 Test: blockdev write zeroes read block ...passed 00:19:55.044 Test: blockdev write zeroes read no split ...passed 00:19:55.044 Test: blockdev write zeroes 
read split ...passed 00:19:55.044 Test: blockdev write zeroes read split partial ...passed 00:19:55.044 Test: blockdev reset ...[2024-12-06 19:18:05.597270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:55.044 [2024-12-06 19:18:05.597390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa22b0 (9): Bad file descriptor 00:19:55.303 [2024-12-06 19:18:05.747635] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:55.303 passed 00:19:55.303 Test: blockdev write read 8 blocks ...passed 00:19:55.303 Test: blockdev write read size > 128k ...passed 00:19:55.303 Test: blockdev write read invalid size ...passed 00:19:55.303 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:55.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:55.303 Test: blockdev write read max offset ...passed 00:19:55.560 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:55.560 Test: blockdev writev readv 8 blocks ...passed 00:19:55.560 Test: blockdev writev readv 30 x 1block ...passed 00:19:55.560 Test: blockdev writev readv block ...passed 00:19:55.560 Test: blockdev writev readv size > 128k ...passed 00:19:55.560 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:55.560 Test: blockdev comparev and writev ...[2024-12-06 19:18:05.962606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.560 [2024-12-06 19:18:05.962643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.560 [2024-12-06 19:18:05.962684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.560 [2024-12-06 
19:18:05.962704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.560 [2024-12-06 19:18:05.963051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.560 [2024-12-06 19:18:05.963075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:55.560 [2024-12-06 19:18:05.963097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.560 [2024-12-06 19:18:05.963113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:55.560 [2024-12-06 19:18:05.963431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.560 [2024-12-06 19:18:05.963454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:55.560 [2024-12-06 19:18:05.963475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.560 [2024-12-06 19:18:05.963490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:55.560 [2024-12-06 19:18:05.963811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.560 [2024-12-06 19:18:05.963834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:55.560 [2024-12-06 19:18:05.963855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.560 [2024-12-06 19:18:05.963870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:55.561 passed 00:19:55.561 Test: blockdev nvme passthru rw ...passed 00:19:55.561 Test: blockdev nvme passthru vendor specific ...[2024-12-06 19:18:06.047914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:55.561 [2024-12-06 19:18:06.047941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:55.561 [2024-12-06 19:18:06.048079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:55.561 [2024-12-06 19:18:06.048111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:55.561 [2024-12-06 19:18:06.048244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:55.561 [2024-12-06 19:18:06.048266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:55.561 [2024-12-06 19:18:06.048397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:55.561 [2024-12-06 19:18:06.048418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:55.561 passed 00:19:55.561 Test: blockdev nvme admin passthru ...passed 00:19:55.561 Test: blockdev copy ...passed 00:19:55.561 00:19:55.561 Run Summary: Type Total Ran Passed Failed Inactive 00:19:55.561 suites 1 1 n/a 0 0 00:19:55.561 tests 23 23 23 0 0 00:19:55.561 asserts 152 152 152 0 n/a 00:19:55.561 00:19:55.561 Elapsed time = 1.239 seconds 
00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:56.126 rmmod nvme_tcp 00:19:56.126 rmmod nvme_fabrics 00:19:56.126 rmmod nvme_keyring 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1132863 ']' 00:19:56.126 19:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1132863 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1132863 ']' 00:19:56.126 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1132863 00:19:56.127 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:56.127 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.127 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1132863 00:19:56.127 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:56.127 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:56.127 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1132863' 00:19:56.127 killing process with pid 1132863 00:19:56.127 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1132863 00:19:56.127 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1132863 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:56.385 19:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.385 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.927 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:58.927 00:19:58.927 real 0m6.840s 00:19:58.927 user 0m11.847s 00:19:58.927 sys 0m2.666s 00:19:58.927 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.927 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.927 ************************************ 00:19:58.927 END TEST nvmf_bdevio_no_huge 00:19:58.927 ************************************ 00:19:58.927 19:18:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:58.927 19:18:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:58.927 19:18:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.927 19:18:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:58.927 
************************************ 00:19:58.927 START TEST nvmf_tls 00:19:58.927 ************************************ 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:58.927 * Looking for test storage... 00:19:58.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.927 --rc genhtml_branch_coverage=1 00:19:58.927 --rc genhtml_function_coverage=1 00:19:58.927 --rc genhtml_legend=1 00:19:58.927 --rc geninfo_all_blocks=1 00:19:58.927 --rc geninfo_unexecuted_blocks=1 00:19:58.927 00:19:58.927 ' 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.927 --rc genhtml_branch_coverage=1 00:19:58.927 --rc genhtml_function_coverage=1 00:19:58.927 --rc genhtml_legend=1 00:19:58.927 --rc geninfo_all_blocks=1 00:19:58.927 --rc geninfo_unexecuted_blocks=1 00:19:58.927 00:19:58.927 ' 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.927 --rc genhtml_branch_coverage=1 00:19:58.927 --rc genhtml_function_coverage=1 00:19:58.927 --rc genhtml_legend=1 00:19:58.927 --rc geninfo_all_blocks=1 00:19:58.927 --rc geninfo_unexecuted_blocks=1 00:19:58.927 00:19:58.927 ' 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.927 --rc genhtml_branch_coverage=1 00:19:58.927 --rc genhtml_function_coverage=1 00:19:58.927 --rc genhtml_legend=1 00:19:58.927 --rc geninfo_all_blocks=1 00:19:58.927 --rc geninfo_unexecuted_blocks=1 00:19:58.927 00:19:58.927 ' 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.927 
19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:58.927 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:58.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:58.928 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.829 19:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:00.829 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:00.829 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.829 19:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:00.829 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:00.829 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:00.829 19:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:00.829 
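[editor note] The network plumbing that `nvmf_tcp_init` performs in the surrounding trace can be condensed to the sketch below. Interface names (`cvl_0_0`/`cvl_0_1`), the namespace name, and the 10.0.0.x addresses are taken from the log; the commands require root and a machine with those interfaces, so this is a summary of what was logged, not a portable script.

```shell
# One port of the NIC is moved into a private namespace and becomes the
# target side; the peer port stays in the root namespace as the initiator.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, then verify
# connectivity in both directions, as the log does.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The target application is then launched with `ip netns exec cvl_0_0_ns_spdk ...` so that it listens inside the namespace while the initiator connects from the root namespace over the physical link.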
19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.829 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:00.830 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.830 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.830 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.830 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:00.830 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:00.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:20:00.830 00:20:00.830 --- 10.0.0.2 ping statistics --- 00:20:00.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.830 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:20:00.830 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:01.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:20:01.088 00:20:01.088 --- 10.0.0.1 ping statistics --- 00:20:01.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.088 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1135607 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1135607 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1135607 ']' 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.088 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.088 [2024-12-06 19:18:11.483745] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:01.088 [2024-12-06 19:18:11.483836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.088 [2024-12-06 19:18:11.555725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.088 [2024-12-06 19:18:11.611593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.088 [2024-12-06 19:18:11.611644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:01.088 [2024-12-06 19:18:11.611681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.088 [2024-12-06 19:18:11.611694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.088 [2024-12-06 19:18:11.611704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.088 [2024-12-06 19:18:11.612330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.346 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.346 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.346 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.346 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.346 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.346 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.346 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:01.346 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:01.604 true 00:20:01.604 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.604 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:01.864 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:01.864 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:01.864 
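[editor note] The `format_interchange_psk` helper exercised a few steps later in this trace (via the inline `python -` call in `nvmf/common.sh`) wraps a configured PSK in the NVMe-oF TLS PSK interchange format: `NVMeTLSkey-1:<hh>:<base64(PSK || CRC32)>:`. A minimal sketch follows; it assumes the key string is taken as ASCII bytes and the CRC32 is appended little-endian, which matches the `NVMeTLSkey-1:01:MDAxMTIy...` output logged below for the key `00112233445566778899aabbccddeeff`.

```python
import base64
import struct
import zlib


def format_interchange_psk(key: str, hash_id: int = 1) -> str:
    """Frame a configured PSK in the NVMe-oF TLS interchange format.

    The retained key bytes are followed by their CRC32 (packed
    little-endian), base64-encoded, and wrapped as
    NVMeTLSkey-1:<hh>:<base64>:  where <hh> identifies the hash
    (01 in this test run).
    """
    data = key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(data))
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"


print(format_interchange_psk("00112233445566778899aabbccddeeff"))
```

The test writes the resulting string to a `mktemp` file, `chmod 0600`s it, and later registers it with `keyring_file_add_key` so both the target and bdevperf can reference it by name.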
19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:02.124 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.124 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:02.384 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:02.384 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:02.384 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:02.648 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.648 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:02.908 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:02.908 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:02.908 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.908 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:03.168 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:03.168 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:03.168 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:03.429 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:03.429 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:03.689 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:03.689 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:03.689 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:03.949 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:03.949 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:04.207 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:04.207 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:04.207 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:04.207 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:04.207 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:04.207 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:04.207 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:04.207 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:04.207 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:04.466 19:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yKy071oyqv 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.BCT46aRHnl 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yKy071oyqv 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.BCT46aRHnl 00:20:04.466 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:04.736 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:05.314 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yKy071oyqv 00:20:05.314 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yKy071oyqv 00:20:05.314 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:05.572 [2024-12-06 19:18:15.914583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.572 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:05.831 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:06.089 [2024-12-06 19:18:16.492185] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.089 [2024-12-06 19:18:16.492444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.089 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:06.348 malloc0 00:20:06.348 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:06.606 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yKy071oyqv 00:20:06.864 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:07.122 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yKy071oyqv 00:20:19.406 Initializing NVMe Controllers 00:20:19.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:19.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:19.406 Initialization complete. Launching workers. 
00:20:19.406 ======================================================== 00:20:19.406 Latency(us) 00:20:19.406 Device Information : IOPS MiB/s Average min max 00:20:19.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8732.42 34.11 7331.10 1071.27 9840.26 00:20:19.406 ======================================================== 00:20:19.406 Total : 8732.42 34.11 7331.10 1071.27 9840.26 00:20:19.406 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yKy071oyqv 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yKy071oyqv 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1137625 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1137625 /var/tmp/bdevperf.sock 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1137625 ']' 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.406 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.406 [2024-12-06 19:18:27.829412] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:19.406 [2024-12-06 19:18:27.829488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137625 ] 00:20:19.406 [2024-12-06 19:18:27.900292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.406 [2024-12-06 19:18:27.959563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.407 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.407 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:19.407 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yKy071oyqv 00:20:19.407 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:20:19.407 [2024-12-06 19:18:28.573155] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.407 TLSTESTn1 00:20:19.407 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:19.407 Running I/O for 10 seconds... 00:20:20.346 3183.00 IOPS, 12.43 MiB/s [2024-12-06T18:18:31.869Z] 3255.00 IOPS, 12.71 MiB/s [2024-12-06T18:18:32.804Z] 3271.00 IOPS, 12.78 MiB/s [2024-12-06T18:18:34.185Z] 3287.75 IOPS, 12.84 MiB/s [2024-12-06T18:18:35.124Z] 3308.20 IOPS, 12.92 MiB/s [2024-12-06T18:18:36.058Z] 3302.67 IOPS, 12.90 MiB/s [2024-12-06T18:18:36.998Z] 3317.00 IOPS, 12.96 MiB/s [2024-12-06T18:18:37.955Z] 3318.38 IOPS, 12.96 MiB/s [2024-12-06T18:18:38.890Z] 3324.22 IOPS, 12.99 MiB/s [2024-12-06T18:18:38.890Z] 3334.30 IOPS, 13.02 MiB/s 00:20:28.313 Latency(us) 00:20:28.313 [2024-12-06T18:18:38.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.313 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:28.313 Verification LBA range: start 0x0 length 0x2000 00:20:28.313 TLSTESTn1 : 10.02 3341.54 13.05 0.00 0.00 38242.68 6310.87 37671.06 00:20:28.313 [2024-12-06T18:18:38.890Z] =================================================================================================================== 00:20:28.313 [2024-12-06T18:18:38.890Z] Total : 3341.54 13.05 0.00 0.00 38242.68 6310.87 37671.06 00:20:28.313 { 00:20:28.313 "results": [ 00:20:28.313 { 00:20:28.313 "job": "TLSTESTn1", 00:20:28.313 "core_mask": "0x4", 00:20:28.313 "workload": "verify", 00:20:28.313 "status": "finished", 00:20:28.313 "verify_range": { 00:20:28.313 "start": 0, 00:20:28.313 "length": 8192 00:20:28.313 }, 00:20:28.313 "queue_depth": 128, 00:20:28.313 "io_size": 4096, 00:20:28.313 "runtime": 10.016638, 00:20:28.313 "iops": 
3341.5403451737, 00:20:28.313 "mibps": 13.052891973334766, 00:20:28.313 "io_failed": 0, 00:20:28.313 "io_timeout": 0, 00:20:28.313 "avg_latency_us": 38242.68032226903, 00:20:28.313 "min_latency_us": 6310.874074074074, 00:20:28.313 "max_latency_us": 37671.0637037037 00:20:28.313 } 00:20:28.313 ], 00:20:28.313 "core_count": 1 00:20:28.313 } 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1137625 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1137625 ']' 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1137625 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1137625 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:28.313 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137625' 00:20:28.571 killing process with pid 1137625 00:20:28.571 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1137625 00:20:28.571 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.571 00:20:28.571 Latency(us) 00:20:28.571 [2024-12-06T18:18:39.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.571 [2024-12-06T18:18:39.148Z] 
=================================================================================================================== 00:20:28.571 [2024-12-06T18:18:39.148Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.571 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1137625 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BCT46aRHnl 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BCT46aRHnl 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BCT46aRHnl 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BCT46aRHnl 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1138863 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1138863 /var/tmp/bdevperf.sock 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1138863 ']' 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.571 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.827 [2024-12-06 19:18:39.160631] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:20:28.827 [2024-12-06 19:18:39.160749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138863 ] 00:20:28.827 [2024-12-06 19:18:39.229574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.827 [2024-12-06 19:18:39.287925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.827 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.827 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:28.827 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BCT46aRHnl 00:20:29.392 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:29.650 [2024-12-06 19:18:39.974025] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.650 [2024-12-06 19:18:39.983793] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:29.650 [2024-12-06 19:18:39.984213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b93f30 (107): Transport endpoint is not connected 00:20:29.650 [2024-12-06 19:18:39.985204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b93f30 (9): Bad file descriptor 00:20:29.650 
[2024-12-06 19:18:39.986206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:29.650 [2024-12-06 19:18:39.986234] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:29.650 [2024-12-06 19:18:39.986265] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:29.650 [2024-12-06 19:18:39.986280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:29.650 request: 00:20:29.650 { 00:20:29.650 "name": "TLSTEST", 00:20:29.650 "trtype": "tcp", 00:20:29.650 "traddr": "10.0.0.2", 00:20:29.650 "adrfam": "ipv4", 00:20:29.650 "trsvcid": "4420", 00:20:29.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.650 "prchk_reftag": false, 00:20:29.650 "prchk_guard": false, 00:20:29.650 "hdgst": false, 00:20:29.650 "ddgst": false, 00:20:29.650 "psk": "key0", 00:20:29.650 "allow_unrecognized_csi": false, 00:20:29.650 "method": "bdev_nvme_attach_controller", 00:20:29.650 "req_id": 1 00:20:29.650 } 00:20:29.650 Got JSON-RPC error response 00:20:29.650 response: 00:20:29.650 { 00:20:29.650 "code": -5, 00:20:29.650 "message": "Input/output error" 00:20:29.650 } 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1138863 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1138863 ']' 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1138863 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1138863 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1138863' 00:20:29.650 killing process with pid 1138863 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1138863 00:20:29.650 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.650 00:20:29.650 Latency(us) 00:20:29.650 [2024-12-06T18:18:40.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.650 [2024-12-06T18:18:40.227Z] =================================================================================================================== 00:20:29.650 [2024-12-06T18:18:40.227Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:29.650 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1138863 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yKy071oyqv 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yKy071oyqv 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yKy071oyqv 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:29.908 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yKy071oyqv 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1139029 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1139029 
/var/tmp/bdevperf.sock 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1139029 ']' 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.909 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.909 [2024-12-06 19:18:40.324260] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:20:29.909 [2024-12-06 19:18:40.324341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139029 ] 00:20:29.909 [2024-12-06 19:18:40.391724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.909 [2024-12-06 19:18:40.449145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.167 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.167 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:30.167 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yKy071oyqv 00:20:30.425 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:30.683 [2024-12-06 19:18:41.076767] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.683 [2024-12-06 19:18:41.086447] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:30.683 [2024-12-06 19:18:41.086476] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:30.683 [2024-12-06 19:18:41.086528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:30.683 [2024-12-06 19:18:41.086941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2ff30 (107): Transport endpoint is not connected 00:20:30.683 [2024-12-06 19:18:41.087931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2ff30 (9): Bad file descriptor 00:20:30.683 [2024-12-06 19:18:41.088930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:30.683 [2024-12-06 19:18:41.088972] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:30.683 [2024-12-06 19:18:41.088986] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:30.683 [2024-12-06 19:18:41.089001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:30.683 request: 00:20:30.683 { 00:20:30.683 "name": "TLSTEST", 00:20:30.683 "trtype": "tcp", 00:20:30.683 "traddr": "10.0.0.2", 00:20:30.683 "adrfam": "ipv4", 00:20:30.683 "trsvcid": "4420", 00:20:30.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.683 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:30.683 "prchk_reftag": false, 00:20:30.683 "prchk_guard": false, 00:20:30.683 "hdgst": false, 00:20:30.683 "ddgst": false, 00:20:30.683 "psk": "key0", 00:20:30.683 "allow_unrecognized_csi": false, 00:20:30.683 "method": "bdev_nvme_attach_controller", 00:20:30.683 "req_id": 1 00:20:30.683 } 00:20:30.683 Got JSON-RPC error response 00:20:30.683 response: 00:20:30.683 { 00:20:30.683 "code": -5, 00:20:30.683 "message": "Input/output error" 00:20:30.683 } 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1139029 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1139029 ']' 00:20:30.683 19:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1139029 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139029 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139029' 00:20:30.683 killing process with pid 1139029 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1139029 00:20:30.683 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.683 00:20:30.683 Latency(us) 00:20:30.683 [2024-12-06T18:18:41.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.683 [2024-12-06T18:18:41.260Z] =================================================================================================================== 00:20:30.683 [2024-12-06T18:18:41.260Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:30.683 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1139029 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:30.941 19:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yKy071oyqv 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yKy071oyqv 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yKy071oyqv 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yKy071oyqv 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1139130 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1139130 /var/tmp/bdevperf.sock 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1139130 ']' 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.941 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.941 [2024-12-06 19:18:41.388846] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:20:30.941 [2024-12-06 19:18:41.388926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139130 ] 00:20:30.941 [2024-12-06 19:18:41.457275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.941 [2024-12-06 19:18:41.513364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.199 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.199 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:31.199 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yKy071oyqv 00:20:31.458 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:31.718 [2024-12-06 19:18:42.142194] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.718 [2024-12-06 19:18:42.149211] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:31.718 [2024-12-06 19:18:42.149239] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:31.718 [2024-12-06 19:18:42.149294] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:31.719 [2024-12-06 19:18:42.149455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x580f30 (107): Transport endpoint is not connected 00:20:31.719 [2024-12-06 19:18:42.150444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x580f30 (9): Bad file descriptor 00:20:31.719 [2024-12-06 19:18:42.151444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:31.719 [2024-12-06 19:18:42.151465] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:31.719 [2024-12-06 19:18:42.151494] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:31.719 [2024-12-06 19:18:42.151510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:31.719 request: 00:20:31.719 { 00:20:31.719 "name": "TLSTEST", 00:20:31.719 "trtype": "tcp", 00:20:31.719 "traddr": "10.0.0.2", 00:20:31.719 "adrfam": "ipv4", 00:20:31.719 "trsvcid": "4420", 00:20:31.719 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:31.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.719 "prchk_reftag": false, 00:20:31.719 "prchk_guard": false, 00:20:31.719 "hdgst": false, 00:20:31.719 "ddgst": false, 00:20:31.719 "psk": "key0", 00:20:31.719 "allow_unrecognized_csi": false, 00:20:31.719 "method": "bdev_nvme_attach_controller", 00:20:31.719 "req_id": 1 00:20:31.719 } 00:20:31.719 Got JSON-RPC error response 00:20:31.719 response: 00:20:31.719 { 00:20:31.719 "code": -5, 00:20:31.719 "message": "Input/output error" 00:20:31.719 } 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1139130 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1139130 ']' 00:20:31.719 19:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1139130 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139130 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139130' 00:20:31.719 killing process with pid 1139130 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1139130 00:20:31.719 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.719 00:20:31.719 Latency(us) 00:20:31.719 [2024-12-06T18:18:42.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.719 [2024-12-06T18:18:42.296Z] =================================================================================================================== 00:20:31.719 [2024-12-06T18:18:42.296Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:31.719 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1139130 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.979 19:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1139269 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1139269 /var/tmp/bdevperf.sock 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1139269 ']' 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.979 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.979 [2024-12-06 19:18:42.479206] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:20:31.979 [2024-12-06 19:18:42.479281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139269 ] 00:20:31.979 [2024-12-06 19:18:42.550199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.237 [2024-12-06 19:18:42.610194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.237 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.237 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:32.237 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:32.496 [2024-12-06 19:18:42.984072] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:32.496 [2024-12-06 19:18:42.984122] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:32.496 request: 00:20:32.496 { 00:20:32.496 "name": "key0", 00:20:32.496 "path": "", 00:20:32.496 "method": "keyring_file_add_key", 00:20:32.496 "req_id": 1 00:20:32.496 } 00:20:32.496 Got JSON-RPC error response 00:20:32.496 response: 00:20:32.496 { 00:20:32.496 "code": -1, 00:20:32.496 "message": "Operation not permitted" 00:20:32.496 } 00:20:32.496 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:32.755 [2024-12-06 19:18:43.248905] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:32.755 [2024-12-06 19:18:43.248979] bdev_nvme.c:6748:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:32.755 request: 00:20:32.755 { 00:20:32.755 "name": "TLSTEST", 00:20:32.755 "trtype": "tcp", 00:20:32.755 "traddr": "10.0.0.2", 00:20:32.755 "adrfam": "ipv4", 00:20:32.755 "trsvcid": "4420", 00:20:32.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.755 "prchk_reftag": false, 00:20:32.755 "prchk_guard": false, 00:20:32.755 "hdgst": false, 00:20:32.755 "ddgst": false, 00:20:32.755 "psk": "key0", 00:20:32.755 "allow_unrecognized_csi": false, 00:20:32.755 "method": "bdev_nvme_attach_controller", 00:20:32.755 "req_id": 1 00:20:32.755 } 00:20:32.755 Got JSON-RPC error response 00:20:32.755 response: 00:20:32.755 { 00:20:32.755 "code": -126, 00:20:32.755 "message": "Required key not available" 00:20:32.755 } 00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1139269 00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1139269 ']' 00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1139269 00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139269 00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139269' 00:20:32.755 killing process with pid 1139269 
00:20:32.755 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1139269 00:20:32.755 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.755 00:20:32.755 Latency(us) 00:20:32.755 [2024-12-06T18:18:43.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.755 [2024-12-06T18:18:43.333Z] =================================================================================================================== 00:20:32.756 [2024-12-06T18:18:43.333Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:32.756 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1139269 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1135607 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1135607 ']' 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1135607 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1135607 00:20:33.014 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:20:33.015 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:33.015 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1135607' 00:20:33.015 killing process with pid 1135607 00:20:33.015 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1135607 00:20:33.015 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1135607 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zx8QaQ82uL 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:33.273 19:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zx8QaQ82uL 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.273 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.533 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1139540 00:20:33.533 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:33.533 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1139540 00:20:33.533 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1139540 ']' 00:20:33.533 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.533 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.533 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.533 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.533 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.533 [2024-12-06 19:18:43.907538] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:20:33.533 [2024-12-06 19:18:43.907626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.533 [2024-12-06 19:18:43.986626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.533 [2024-12-06 19:18:44.046463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.533 [2024-12-06 19:18:44.046516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.534 [2024-12-06 19:18:44.046545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.534 [2024-12-06 19:18:44.046557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.534 [2024-12-06 19:18:44.046567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:33.534 [2024-12-06 19:18:44.047166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.793 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.793 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:33.793 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:33.793 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.793 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.793 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.793 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zx8QaQ82uL 00:20:33.793 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zx8QaQ82uL 00:20:33.793 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:34.077 [2024-12-06 19:18:44.503909] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.077 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:34.594 [2024-12-06 19:18:45.137573] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.594 [2024-12-06 19:18:45.137854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:34.594 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.853 malloc0 00:20:34.853 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:35.420 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zx8QaQ82uL 00:20:35.678 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zx8QaQ82uL 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zx8QaQ82uL 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1139832 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.937 19:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1139832 /var/tmp/bdevperf.sock 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1139832 ']' 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.937 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.937 [2024-12-06 19:18:46.389712] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:20:35.937 [2024-12-06 19:18:46.389787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139832 ] 00:20:35.937 [2024-12-06 19:18:46.453922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.937 [2024-12-06 19:18:46.510180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.196 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.196 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.196 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zx8QaQ82uL 00:20:36.454 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:36.712 [2024-12-06 19:18:47.138819] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.712 TLSTESTn1 00:20:36.712 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:36.971 Running I/O for 10 seconds... 
00:20:38.844 3120.00 IOPS, 12.19 MiB/s [2024-12-06T18:18:50.799Z] 3198.00 IOPS, 12.49 MiB/s [2024-12-06T18:18:51.369Z] 3198.00 IOPS, 12.49 MiB/s [2024-12-06T18:18:52.751Z] 3212.00 IOPS, 12.55 MiB/s [2024-12-06T18:18:53.775Z] 3219.80 IOPS, 12.58 MiB/s [2024-12-06T18:18:54.714Z] 3244.17 IOPS, 12.67 MiB/s [2024-12-06T18:18:55.651Z] 3246.43 IOPS, 12.68 MiB/s [2024-12-06T18:18:56.584Z] 3259.25 IOPS, 12.73 MiB/s [2024-12-06T18:18:57.519Z] 3263.44 IOPS, 12.75 MiB/s [2024-12-06T18:18:57.519Z] 3256.30 IOPS, 12.72 MiB/s 00:20:46.942 Latency(us) 00:20:46.942 [2024-12-06T18:18:57.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.942 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:46.942 Verification LBA range: start 0x0 length 0x2000 00:20:46.942 TLSTESTn1 : 10.02 3261.77 12.74 0.00 0.00 39167.69 10194.49 38447.79 00:20:46.942 [2024-12-06T18:18:57.519Z] =================================================================================================================== 00:20:46.942 [2024-12-06T18:18:57.519Z] Total : 3261.77 12.74 0.00 0.00 39167.69 10194.49 38447.79 00:20:46.942 { 00:20:46.942 "results": [ 00:20:46.942 { 00:20:46.942 "job": "TLSTESTn1", 00:20:46.942 "core_mask": "0x4", 00:20:46.942 "workload": "verify", 00:20:46.942 "status": "finished", 00:20:46.942 "verify_range": { 00:20:46.942 "start": 0, 00:20:46.942 "length": 8192 00:20:46.942 }, 00:20:46.942 "queue_depth": 128, 00:20:46.942 "io_size": 4096, 00:20:46.942 "runtime": 10.021866, 00:20:46.942 "iops": 3261.767818488094, 00:20:46.942 "mibps": 12.741280540969116, 00:20:46.942 "io_failed": 0, 00:20:46.942 "io_timeout": 0, 00:20:46.942 "avg_latency_us": 39167.6926502176, 00:20:46.942 "min_latency_us": 10194.488888888889, 00:20:46.942 "max_latency_us": 38447.78666666667 00:20:46.942 } 00:20:46.942 ], 00:20:46.942 "core_count": 1 00:20:46.942 } 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1139832 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1139832 ']' 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1139832 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139832 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139832' 00:20:46.942 killing process with pid 1139832 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1139832 00:20:46.942 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.942 00:20:46.942 Latency(us) 00:20:46.942 [2024-12-06T18:18:57.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.942 [2024-12-06T18:18:57.519Z] =================================================================================================================== 00:20:46.942 [2024-12-06T18:18:57.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.942 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1139832 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zx8QaQ82uL 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zx8QaQ82uL 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zx8QaQ82uL 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zx8QaQ82uL 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zx8QaQ82uL 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1141151 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.199 
19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1141151 /var/tmp/bdevperf.sock 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1141151 ']' 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.199 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.199 [2024-12-06 19:18:57.702970] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:20:47.199 [2024-12-06 19:18:57.703049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141151 ] 00:20:47.199 [2024-12-06 19:18:57.768513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.456 [2024-12-06 19:18:57.828710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zx8QaQ82uL 00:20:47.713 [2024-12-06 19:18:58.182868] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zx8QaQ82uL': 0100666 00:20:47.713 [2024-12-06 19:18:58.182916] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:47.713 request: 00:20:47.713 { 00:20:47.713 "name": "key0", 00:20:47.713 "path": "/tmp/tmp.zx8QaQ82uL", 00:20:47.713 "method": "keyring_file_add_key", 00:20:47.713 "req_id": 1 00:20:47.713 } 00:20:47.713 Got JSON-RPC error response 00:20:47.713 response: 00:20:47.713 { 00:20:47.713 "code": -1, 00:20:47.713 "message": "Operation not permitted" 00:20:47.713 } 00:20:47.713 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:47.972 [2024-12-06 19:18:58.455730] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.972 [2024-12-06 19:18:58.455797] bdev_nvme.c:6748:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:47.972 request: 00:20:47.972 { 00:20:47.972 "name": "TLSTEST", 00:20:47.972 "trtype": "tcp", 00:20:47.972 "traddr": "10.0.0.2", 00:20:47.972 "adrfam": "ipv4", 00:20:47.972 "trsvcid": "4420", 00:20:47.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.972 "prchk_reftag": false, 00:20:47.972 "prchk_guard": false, 00:20:47.972 "hdgst": false, 00:20:47.972 "ddgst": false, 00:20:47.972 "psk": "key0", 00:20:47.972 "allow_unrecognized_csi": false, 00:20:47.972 "method": "bdev_nvme_attach_controller", 00:20:47.972 "req_id": 1 00:20:47.972 } 00:20:47.972 Got JSON-RPC error response 00:20:47.972 response: 00:20:47.972 { 00:20:47.972 "code": -126, 00:20:47.972 "message": "Required key not available" 00:20:47.972 } 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1141151 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1141151 ']' 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1141151 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1141151 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1141151' 00:20:47.972 killing process with pid 1141151 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1141151 00:20:47.972 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.972 00:20:47.972 Latency(us) 00:20:47.972 [2024-12-06T18:18:58.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.972 [2024-12-06T18:18:58.549Z] =================================================================================================================== 00:20:47.972 [2024-12-06T18:18:58.549Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:47.972 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1141151 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1139540 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1139540 ']' 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1139540 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139540 00:20:48.232 
19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139540' 00:20:48.232 killing process with pid 1139540 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1139540 00:20:48.232 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1139540 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1141306 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1141306 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1141306 ']' 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:48.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.490 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.490 [2024-12-06 19:18:59.066723] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:48.490 [2024-12-06 19:18:59.066824] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.749 [2024-12-06 19:18:59.137890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.749 [2024-12-06 19:18:59.191432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.749 [2024-12-06 19:18:59.191511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.749 [2024-12-06 19:18:59.191539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.749 [2024-12-06 19:18:59.191550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.749 [2024-12-06 19:18:59.191561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.749 [2024-12-06 19:18:59.192118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zx8QaQ82uL 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zx8QaQ82uL 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.zx8QaQ82uL 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zx8QaQ82uL 00:20:48.749 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.007 [2024-12-06 19:18:59.577850] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.266 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:49.525 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:49.785 [2024-12-06 19:19:00.131378] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.785 [2024-12-06 19:19:00.131635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.785 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:50.044 malloc0 00:20:50.044 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:50.304 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zx8QaQ82uL 00:20:50.562 [2024-12-06 19:19:00.985813] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zx8QaQ82uL': 0100666 00:20:50.562 [2024-12-06 19:19:00.985856] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:50.562 request: 00:20:50.562 { 00:20:50.562 "name": "key0", 00:20:50.562 "path": "/tmp/tmp.zx8QaQ82uL", 00:20:50.562 "method": "keyring_file_add_key", 00:20:50.562 "req_id": 1 
00:20:50.562 } 00:20:50.562 Got JSON-RPC error response 00:20:50.562 response: 00:20:50.562 { 00:20:50.562 "code": -1, 00:20:50.562 "message": "Operation not permitted" 00:20:50.562 } 00:20:50.562 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:50.822 [2024-12-06 19:19:01.254554] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:50.822 [2024-12-06 19:19:01.254630] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:50.822 request: 00:20:50.822 { 00:20:50.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.822 "host": "nqn.2016-06.io.spdk:host1", 00:20:50.822 "psk": "key0", 00:20:50.822 "method": "nvmf_subsystem_add_host", 00:20:50.822 "req_id": 1 00:20:50.822 } 00:20:50.822 Got JSON-RPC error response 00:20:50.822 response: 00:20:50.822 { 00:20:50.822 "code": -32603, 00:20:50.822 "message": "Internal error" 00:20:50.822 } 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1141306 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1141306 ']' 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1141306 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:50.822 19:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1141306 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1141306' 00:20:50.822 killing process with pid 1141306 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1141306 00:20:50.822 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1141306 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zx8QaQ82uL 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1141603 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1141603 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1141603 ']' 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.083 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.083 [2024-12-06 19:19:01.608637] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:51.083 [2024-12-06 19:19:01.608773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.342 [2024-12-06 19:19:01.682055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.342 [2024-12-06 19:19:01.740888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.342 [2024-12-06 19:19:01.740971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.342 [2024-12-06 19:19:01.740986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.342 [2024-12-06 19:19:01.740998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.342 [2024-12-06 19:19:01.741023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:51.342 [2024-12-06 19:19:01.741597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.342 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.342 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.342 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.342 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.342 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.342 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.342 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zx8QaQ82uL 00:20:51.342 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zx8QaQ82uL 00:20:51.342 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:51.911 [2024-12-06 19:19:02.191022] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.911 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:52.172 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:52.173 [2024-12-06 19:19:02.736503] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:52.173 [2024-12-06 19:19:02.736807] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:52.432 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:52.691 malloc0 00:20:52.691 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:52.950 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zx8QaQ82uL 00:20:53.208 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1141888 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1141888 /var/tmp/bdevperf.sock 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1141888 ']' 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:20:53.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.467 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.467 [2024-12-06 19:19:03.899215] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:53.467 [2024-12-06 19:19:03.899290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141888 ] 00:20:53.467 [2024-12-06 19:19:03.967189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.467 [2024-12-06 19:19:04.028155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.726 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.726 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:53.726 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zx8QaQ82uL 00:20:53.984 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:54.242 [2024-12-06 19:19:04.700031] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.242 TLSTESTn1 00:20:54.242 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:54.807 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:54.807 "subsystems": [ 00:20:54.807 { 00:20:54.807 "subsystem": "keyring", 00:20:54.807 "config": [ 00:20:54.807 { 00:20:54.807 "method": "keyring_file_add_key", 00:20:54.807 "params": { 00:20:54.807 "name": "key0", 00:20:54.808 "path": "/tmp/tmp.zx8QaQ82uL" 00:20:54.808 } 00:20:54.808 } 00:20:54.808 ] 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "subsystem": "iobuf", 00:20:54.808 "config": [ 00:20:54.808 { 00:20:54.808 "method": "iobuf_set_options", 00:20:54.808 "params": { 00:20:54.808 "small_pool_count": 8192, 00:20:54.808 "large_pool_count": 1024, 00:20:54.808 "small_bufsize": 8192, 00:20:54.808 "large_bufsize": 135168, 00:20:54.808 "enable_numa": false 00:20:54.808 } 00:20:54.808 } 00:20:54.808 ] 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "subsystem": "sock", 00:20:54.808 "config": [ 00:20:54.808 { 00:20:54.808 "method": "sock_set_default_impl", 00:20:54.808 "params": { 00:20:54.808 "impl_name": "posix" 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "sock_impl_set_options", 00:20:54.808 "params": { 00:20:54.808 "impl_name": "ssl", 00:20:54.808 "recv_buf_size": 4096, 00:20:54.808 "send_buf_size": 4096, 00:20:54.808 "enable_recv_pipe": true, 00:20:54.808 "enable_quickack": false, 00:20:54.808 "enable_placement_id": 0, 00:20:54.808 "enable_zerocopy_send_server": true, 00:20:54.808 "enable_zerocopy_send_client": false, 00:20:54.808 "zerocopy_threshold": 0, 00:20:54.808 "tls_version": 0, 00:20:54.808 "enable_ktls": false 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "sock_impl_set_options", 00:20:54.808 "params": { 00:20:54.808 "impl_name": "posix", 00:20:54.808 "recv_buf_size": 2097152, 00:20:54.808 "send_buf_size": 2097152, 00:20:54.808 "enable_recv_pipe": true, 00:20:54.808 "enable_quickack": false, 00:20:54.808 "enable_placement_id": 0, 
00:20:54.808 "enable_zerocopy_send_server": true, 00:20:54.808 "enable_zerocopy_send_client": false, 00:20:54.808 "zerocopy_threshold": 0, 00:20:54.808 "tls_version": 0, 00:20:54.808 "enable_ktls": false 00:20:54.808 } 00:20:54.808 } 00:20:54.808 ] 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "subsystem": "vmd", 00:20:54.808 "config": [] 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "subsystem": "accel", 00:20:54.808 "config": [ 00:20:54.808 { 00:20:54.808 "method": "accel_set_options", 00:20:54.808 "params": { 00:20:54.808 "small_cache_size": 128, 00:20:54.808 "large_cache_size": 16, 00:20:54.808 "task_count": 2048, 00:20:54.808 "sequence_count": 2048, 00:20:54.808 "buf_count": 2048 00:20:54.808 } 00:20:54.808 } 00:20:54.808 ] 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "subsystem": "bdev", 00:20:54.808 "config": [ 00:20:54.808 { 00:20:54.808 "method": "bdev_set_options", 00:20:54.808 "params": { 00:20:54.808 "bdev_io_pool_size": 65535, 00:20:54.808 "bdev_io_cache_size": 256, 00:20:54.808 "bdev_auto_examine": true, 00:20:54.808 "iobuf_small_cache_size": 128, 00:20:54.808 "iobuf_large_cache_size": 16 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "bdev_raid_set_options", 00:20:54.808 "params": { 00:20:54.808 "process_window_size_kb": 1024, 00:20:54.808 "process_max_bandwidth_mb_sec": 0 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "bdev_iscsi_set_options", 00:20:54.808 "params": { 00:20:54.808 "timeout_sec": 30 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "bdev_nvme_set_options", 00:20:54.808 "params": { 00:20:54.808 "action_on_timeout": "none", 00:20:54.808 "timeout_us": 0, 00:20:54.808 "timeout_admin_us": 0, 00:20:54.808 "keep_alive_timeout_ms": 10000, 00:20:54.808 "arbitration_burst": 0, 00:20:54.808 "low_priority_weight": 0, 00:20:54.808 "medium_priority_weight": 0, 00:20:54.808 "high_priority_weight": 0, 00:20:54.808 "nvme_adminq_poll_period_us": 10000, 00:20:54.808 "nvme_ioq_poll_period_us": 0, 
00:20:54.808 "io_queue_requests": 0, 00:20:54.808 "delay_cmd_submit": true, 00:20:54.808 "transport_retry_count": 4, 00:20:54.808 "bdev_retry_count": 3, 00:20:54.808 "transport_ack_timeout": 0, 00:20:54.808 "ctrlr_loss_timeout_sec": 0, 00:20:54.808 "reconnect_delay_sec": 0, 00:20:54.808 "fast_io_fail_timeout_sec": 0, 00:20:54.808 "disable_auto_failback": false, 00:20:54.808 "generate_uuids": false, 00:20:54.808 "transport_tos": 0, 00:20:54.808 "nvme_error_stat": false, 00:20:54.808 "rdma_srq_size": 0, 00:20:54.808 "io_path_stat": false, 00:20:54.808 "allow_accel_sequence": false, 00:20:54.808 "rdma_max_cq_size": 0, 00:20:54.808 "rdma_cm_event_timeout_ms": 0, 00:20:54.808 "dhchap_digests": [ 00:20:54.808 "sha256", 00:20:54.808 "sha384", 00:20:54.808 "sha512" 00:20:54.808 ], 00:20:54.808 "dhchap_dhgroups": [ 00:20:54.808 "null", 00:20:54.808 "ffdhe2048", 00:20:54.808 "ffdhe3072", 00:20:54.808 "ffdhe4096", 00:20:54.808 "ffdhe6144", 00:20:54.808 "ffdhe8192" 00:20:54.808 ], 00:20:54.808 "rdma_umr_per_io": false 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "bdev_nvme_set_hotplug", 00:20:54.808 "params": { 00:20:54.808 "period_us": 100000, 00:20:54.808 "enable": false 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "bdev_malloc_create", 00:20:54.808 "params": { 00:20:54.808 "name": "malloc0", 00:20:54.808 "num_blocks": 8192, 00:20:54.808 "block_size": 4096, 00:20:54.808 "physical_block_size": 4096, 00:20:54.808 "uuid": "65760ecf-bb2c-44d2-a37e-9840b720ea36", 00:20:54.808 "optimal_io_boundary": 0, 00:20:54.808 "md_size": 0, 00:20:54.808 "dif_type": 0, 00:20:54.808 "dif_is_head_of_md": false, 00:20:54.808 "dif_pi_format": 0 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "bdev_wait_for_examine" 00:20:54.808 } 00:20:54.808 ] 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "subsystem": "nbd", 00:20:54.808 "config": [] 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "subsystem": "scheduler", 00:20:54.808 "config": [ 
00:20:54.808 { 00:20:54.808 "method": "framework_set_scheduler", 00:20:54.808 "params": { 00:20:54.808 "name": "static" 00:20:54.808 } 00:20:54.808 } 00:20:54.808 ] 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "subsystem": "nvmf", 00:20:54.808 "config": [ 00:20:54.808 { 00:20:54.808 "method": "nvmf_set_config", 00:20:54.808 "params": { 00:20:54.808 "discovery_filter": "match_any", 00:20:54.808 "admin_cmd_passthru": { 00:20:54.808 "identify_ctrlr": false 00:20:54.808 }, 00:20:54.808 "dhchap_digests": [ 00:20:54.808 "sha256", 00:20:54.808 "sha384", 00:20:54.808 "sha512" 00:20:54.808 ], 00:20:54.808 "dhchap_dhgroups": [ 00:20:54.808 "null", 00:20:54.808 "ffdhe2048", 00:20:54.808 "ffdhe3072", 00:20:54.808 "ffdhe4096", 00:20:54.808 "ffdhe6144", 00:20:54.808 "ffdhe8192" 00:20:54.808 ] 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "nvmf_set_max_subsystems", 00:20:54.808 "params": { 00:20:54.808 "max_subsystems": 1024 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "nvmf_set_crdt", 00:20:54.808 "params": { 00:20:54.808 "crdt1": 0, 00:20:54.808 "crdt2": 0, 00:20:54.808 "crdt3": 0 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "nvmf_create_transport", 00:20:54.808 "params": { 00:20:54.808 "trtype": "TCP", 00:20:54.808 "max_queue_depth": 128, 00:20:54.808 "max_io_qpairs_per_ctrlr": 127, 00:20:54.808 "in_capsule_data_size": 4096, 00:20:54.808 "max_io_size": 131072, 00:20:54.808 "io_unit_size": 131072, 00:20:54.808 "max_aq_depth": 128, 00:20:54.808 "num_shared_buffers": 511, 00:20:54.808 "buf_cache_size": 4294967295, 00:20:54.808 "dif_insert_or_strip": false, 00:20:54.808 "zcopy": false, 00:20:54.808 "c2h_success": false, 00:20:54.808 "sock_priority": 0, 00:20:54.808 "abort_timeout_sec": 1, 00:20:54.808 "ack_timeout": 0, 00:20:54.808 "data_wr_pool_size": 0 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "nvmf_create_subsystem", 00:20:54.808 "params": { 00:20:54.808 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:54.808 "allow_any_host": false, 00:20:54.808 "serial_number": "SPDK00000000000001", 00:20:54.808 "model_number": "SPDK bdev Controller", 00:20:54.808 "max_namespaces": 10, 00:20:54.808 "min_cntlid": 1, 00:20:54.808 "max_cntlid": 65519, 00:20:54.808 "ana_reporting": false 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "nvmf_subsystem_add_host", 00:20:54.808 "params": { 00:20:54.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.808 "host": "nqn.2016-06.io.spdk:host1", 00:20:54.808 "psk": "key0" 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "nvmf_subsystem_add_ns", 00:20:54.808 "params": { 00:20:54.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.808 "namespace": { 00:20:54.808 "nsid": 1, 00:20:54.808 "bdev_name": "malloc0", 00:20:54.808 "nguid": "65760ECFBB2C44D2A37E9840B720EA36", 00:20:54.808 "uuid": "65760ecf-bb2c-44d2-a37e-9840b720ea36", 00:20:54.808 "no_auto_visible": false 00:20:54.808 } 00:20:54.808 } 00:20:54.808 }, 00:20:54.808 { 00:20:54.808 "method": "nvmf_subsystem_add_listener", 00:20:54.808 "params": { 00:20:54.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.808 "listen_address": { 00:20:54.808 "trtype": "TCP", 00:20:54.808 "adrfam": "IPv4", 00:20:54.808 "traddr": "10.0.0.2", 00:20:54.808 "trsvcid": "4420" 00:20:54.809 }, 00:20:54.809 "secure_channel": true 00:20:54.809 } 00:20:54.809 } 00:20:54.809 ] 00:20:54.809 } 00:20:54.809 ] 00:20:54.809 }' 00:20:54.809 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:55.068 "subsystems": [ 00:20:55.068 { 00:20:55.068 "subsystem": "keyring", 00:20:55.068 "config": [ 00:20:55.068 { 00:20:55.068 "method": "keyring_file_add_key", 00:20:55.068 "params": { 00:20:55.068 "name": "key0", 00:20:55.068 "path": 
"/tmp/tmp.zx8QaQ82uL" 00:20:55.068 } 00:20:55.068 } 00:20:55.068 ] 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "subsystem": "iobuf", 00:20:55.068 "config": [ 00:20:55.068 { 00:20:55.068 "method": "iobuf_set_options", 00:20:55.068 "params": { 00:20:55.068 "small_pool_count": 8192, 00:20:55.068 "large_pool_count": 1024, 00:20:55.068 "small_bufsize": 8192, 00:20:55.068 "large_bufsize": 135168, 00:20:55.068 "enable_numa": false 00:20:55.068 } 00:20:55.068 } 00:20:55.068 ] 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "subsystem": "sock", 00:20:55.068 "config": [ 00:20:55.068 { 00:20:55.068 "method": "sock_set_default_impl", 00:20:55.068 "params": { 00:20:55.068 "impl_name": "posix" 00:20:55.068 } 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "method": "sock_impl_set_options", 00:20:55.068 "params": { 00:20:55.068 "impl_name": "ssl", 00:20:55.068 "recv_buf_size": 4096, 00:20:55.068 "send_buf_size": 4096, 00:20:55.068 "enable_recv_pipe": true, 00:20:55.068 "enable_quickack": false, 00:20:55.068 "enable_placement_id": 0, 00:20:55.068 "enable_zerocopy_send_server": true, 00:20:55.068 "enable_zerocopy_send_client": false, 00:20:55.068 "zerocopy_threshold": 0, 00:20:55.068 "tls_version": 0, 00:20:55.068 "enable_ktls": false 00:20:55.068 } 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "method": "sock_impl_set_options", 00:20:55.068 "params": { 00:20:55.068 "impl_name": "posix", 00:20:55.068 "recv_buf_size": 2097152, 00:20:55.068 "send_buf_size": 2097152, 00:20:55.068 "enable_recv_pipe": true, 00:20:55.068 "enable_quickack": false, 00:20:55.068 "enable_placement_id": 0, 00:20:55.068 "enable_zerocopy_send_server": true, 00:20:55.068 "enable_zerocopy_send_client": false, 00:20:55.068 "zerocopy_threshold": 0, 00:20:55.068 "tls_version": 0, 00:20:55.068 "enable_ktls": false 00:20:55.068 } 00:20:55.068 } 00:20:55.068 ] 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "subsystem": "vmd", 00:20:55.068 "config": [] 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "subsystem": "accel", 00:20:55.068 
"config": [ 00:20:55.068 { 00:20:55.068 "method": "accel_set_options", 00:20:55.068 "params": { 00:20:55.068 "small_cache_size": 128, 00:20:55.068 "large_cache_size": 16, 00:20:55.068 "task_count": 2048, 00:20:55.068 "sequence_count": 2048, 00:20:55.068 "buf_count": 2048 00:20:55.068 } 00:20:55.068 } 00:20:55.068 ] 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "subsystem": "bdev", 00:20:55.068 "config": [ 00:20:55.068 { 00:20:55.068 "method": "bdev_set_options", 00:20:55.068 "params": { 00:20:55.068 "bdev_io_pool_size": 65535, 00:20:55.068 "bdev_io_cache_size": 256, 00:20:55.068 "bdev_auto_examine": true, 00:20:55.068 "iobuf_small_cache_size": 128, 00:20:55.068 "iobuf_large_cache_size": 16 00:20:55.068 } 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "method": "bdev_raid_set_options", 00:20:55.068 "params": { 00:20:55.068 "process_window_size_kb": 1024, 00:20:55.068 "process_max_bandwidth_mb_sec": 0 00:20:55.068 } 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "method": "bdev_iscsi_set_options", 00:20:55.068 "params": { 00:20:55.068 "timeout_sec": 30 00:20:55.068 } 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "method": "bdev_nvme_set_options", 00:20:55.068 "params": { 00:20:55.068 "action_on_timeout": "none", 00:20:55.068 "timeout_us": 0, 00:20:55.068 "timeout_admin_us": 0, 00:20:55.068 "keep_alive_timeout_ms": 10000, 00:20:55.068 "arbitration_burst": 0, 00:20:55.068 "low_priority_weight": 0, 00:20:55.068 "medium_priority_weight": 0, 00:20:55.068 "high_priority_weight": 0, 00:20:55.068 "nvme_adminq_poll_period_us": 10000, 00:20:55.068 "nvme_ioq_poll_period_us": 0, 00:20:55.068 "io_queue_requests": 512, 00:20:55.068 "delay_cmd_submit": true, 00:20:55.068 "transport_retry_count": 4, 00:20:55.068 "bdev_retry_count": 3, 00:20:55.068 "transport_ack_timeout": 0, 00:20:55.068 "ctrlr_loss_timeout_sec": 0, 00:20:55.068 "reconnect_delay_sec": 0, 00:20:55.068 "fast_io_fail_timeout_sec": 0, 00:20:55.068 "disable_auto_failback": false, 00:20:55.068 "generate_uuids": false, 00:20:55.068 
"transport_tos": 0, 00:20:55.068 "nvme_error_stat": false, 00:20:55.068 "rdma_srq_size": 0, 00:20:55.068 "io_path_stat": false, 00:20:55.068 "allow_accel_sequence": false, 00:20:55.068 "rdma_max_cq_size": 0, 00:20:55.068 "rdma_cm_event_timeout_ms": 0, 00:20:55.068 "dhchap_digests": [ 00:20:55.068 "sha256", 00:20:55.068 "sha384", 00:20:55.068 "sha512" 00:20:55.068 ], 00:20:55.068 "dhchap_dhgroups": [ 00:20:55.068 "null", 00:20:55.068 "ffdhe2048", 00:20:55.068 "ffdhe3072", 00:20:55.068 "ffdhe4096", 00:20:55.068 "ffdhe6144", 00:20:55.068 "ffdhe8192" 00:20:55.068 ], 00:20:55.068 "rdma_umr_per_io": false 00:20:55.068 } 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "method": "bdev_nvme_attach_controller", 00:20:55.068 "params": { 00:20:55.068 "name": "TLSTEST", 00:20:55.068 "trtype": "TCP", 00:20:55.068 "adrfam": "IPv4", 00:20:55.068 "traddr": "10.0.0.2", 00:20:55.068 "trsvcid": "4420", 00:20:55.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.068 "prchk_reftag": false, 00:20:55.068 "prchk_guard": false, 00:20:55.068 "ctrlr_loss_timeout_sec": 0, 00:20:55.068 "reconnect_delay_sec": 0, 00:20:55.068 "fast_io_fail_timeout_sec": 0, 00:20:55.068 "psk": "key0", 00:20:55.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.068 "hdgst": false, 00:20:55.068 "ddgst": false, 00:20:55.068 "multipath": "multipath" 00:20:55.068 } 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "method": "bdev_nvme_set_hotplug", 00:20:55.068 "params": { 00:20:55.068 "period_us": 100000, 00:20:55.068 "enable": false 00:20:55.068 } 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "method": "bdev_wait_for_examine" 00:20:55.068 } 00:20:55.068 ] 00:20:55.068 }, 00:20:55.068 { 00:20:55.068 "subsystem": "nbd", 00:20:55.068 "config": [] 00:20:55.068 } 00:20:55.068 ] 00:20:55.068 }' 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1141888 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1141888 ']' 00:20:55.068 19:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1141888 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1141888 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1141888' 00:20:55.068 killing process with pid 1141888 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1141888 00:20:55.068 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.068 00:20:55.068 Latency(us) 00:20:55.068 [2024-12-06T18:19:05.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.068 [2024-12-06T18:19:05.645Z] =================================================================================================================== 00:20:55.068 [2024-12-06T18:19:05.645Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:55.068 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1141888 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1141603 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1141603 ']' 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1141603 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1141603 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1141603' 00:20:55.328 killing process with pid 1141603 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1141603 00:20:55.328 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1141603 00:20:55.586 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:55.586 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.586 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.586 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:55.586 "subsystems": [ 00:20:55.586 { 00:20:55.586 "subsystem": "keyring", 00:20:55.586 "config": [ 00:20:55.586 { 00:20:55.586 "method": "keyring_file_add_key", 00:20:55.586 "params": { 00:20:55.586 "name": "key0", 00:20:55.586 "path": "/tmp/tmp.zx8QaQ82uL" 00:20:55.586 } 00:20:55.586 } 00:20:55.586 ] 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "subsystem": "iobuf", 00:20:55.586 "config": [ 00:20:55.586 { 00:20:55.586 "method": "iobuf_set_options", 00:20:55.586 "params": { 00:20:55.586 "small_pool_count": 8192, 00:20:55.586 "large_pool_count": 1024, 00:20:55.586 "small_bufsize": 8192, 00:20:55.586 "large_bufsize": 135168, 00:20:55.586 "enable_numa": false 
00:20:55.586 } 00:20:55.586 } 00:20:55.586 ] 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "subsystem": "sock", 00:20:55.586 "config": [ 00:20:55.586 { 00:20:55.586 "method": "sock_set_default_impl", 00:20:55.586 "params": { 00:20:55.586 "impl_name": "posix" 00:20:55.586 } 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "method": "sock_impl_set_options", 00:20:55.586 "params": { 00:20:55.586 "impl_name": "ssl", 00:20:55.586 "recv_buf_size": 4096, 00:20:55.586 "send_buf_size": 4096, 00:20:55.586 "enable_recv_pipe": true, 00:20:55.586 "enable_quickack": false, 00:20:55.586 "enable_placement_id": 0, 00:20:55.586 "enable_zerocopy_send_server": true, 00:20:55.586 "enable_zerocopy_send_client": false, 00:20:55.586 "zerocopy_threshold": 0, 00:20:55.586 "tls_version": 0, 00:20:55.586 "enable_ktls": false 00:20:55.586 } 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "method": "sock_impl_set_options", 00:20:55.586 "params": { 00:20:55.586 "impl_name": "posix", 00:20:55.586 "recv_buf_size": 2097152, 00:20:55.586 "send_buf_size": 2097152, 00:20:55.586 "enable_recv_pipe": true, 00:20:55.586 "enable_quickack": false, 00:20:55.586 "enable_placement_id": 0, 00:20:55.586 "enable_zerocopy_send_server": true, 00:20:55.586 "enable_zerocopy_send_client": false, 00:20:55.586 "zerocopy_threshold": 0, 00:20:55.586 "tls_version": 0, 00:20:55.586 "enable_ktls": false 00:20:55.586 } 00:20:55.586 } 00:20:55.586 ] 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "subsystem": "vmd", 00:20:55.586 "config": [] 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "subsystem": "accel", 00:20:55.586 "config": [ 00:20:55.586 { 00:20:55.586 "method": "accel_set_options", 00:20:55.586 "params": { 00:20:55.586 "small_cache_size": 128, 00:20:55.586 "large_cache_size": 16, 00:20:55.586 "task_count": 2048, 00:20:55.586 "sequence_count": 2048, 00:20:55.586 "buf_count": 2048 00:20:55.586 } 00:20:55.586 } 00:20:55.586 ] 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "subsystem": "bdev", 00:20:55.586 "config": [ 00:20:55.586 { 
00:20:55.586 "method": "bdev_set_options", 00:20:55.586 "params": { 00:20:55.586 "bdev_io_pool_size": 65535, 00:20:55.586 "bdev_io_cache_size": 256, 00:20:55.586 "bdev_auto_examine": true, 00:20:55.586 "iobuf_small_cache_size": 128, 00:20:55.586 "iobuf_large_cache_size": 16 00:20:55.586 } 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "method": "bdev_raid_set_options", 00:20:55.586 "params": { 00:20:55.586 "process_window_size_kb": 1024, 00:20:55.586 "process_max_bandwidth_mb_sec": 0 00:20:55.586 } 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "method": "bdev_iscsi_set_options", 00:20:55.586 "params": { 00:20:55.586 "timeout_sec": 30 00:20:55.586 } 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "method": "bdev_nvme_set_options", 00:20:55.586 "params": { 00:20:55.586 "action_on_timeout": "none", 00:20:55.586 "timeout_us": 0, 00:20:55.586 "timeout_admin_us": 0, 00:20:55.586 "keep_alive_timeout_ms": 10000, 00:20:55.586 "arbitration_burst": 0, 00:20:55.586 "low_priority_weight": 0, 00:20:55.586 "medium_priority_weight": 0, 00:20:55.586 "high_priority_weight": 0, 00:20:55.586 "nvme_adminq_poll_period_us": 10000, 00:20:55.586 "nvme_ioq_poll_period_us": 0, 00:20:55.586 "io_queue_requests": 0, 00:20:55.586 "delay_cmd_submit": true, 00:20:55.586 "transport_retry_count": 4, 00:20:55.586 "bdev_retry_count": 3, 00:20:55.586 "transport_ack_timeout": 0, 00:20:55.586 "ctrlr_loss_timeout_sec": 0, 00:20:55.586 "reconnect_delay_sec": 0, 00:20:55.586 "fast_io_fail_timeout_sec": 0, 00:20:55.586 "disable_auto_failback": false, 00:20:55.586 "generate_uuids": false, 00:20:55.586 "transport_tos": 0, 00:20:55.586 "nvme_error_stat": false, 00:20:55.586 "rdma_srq_size": 0, 00:20:55.586 "io_path_stat": false, 00:20:55.586 "allow_accel_sequence": false, 00:20:55.586 "rdma_max_cq_size": 0, 00:20:55.586 "rdma_cm_event_timeout_ms": 0, 00:20:55.586 "dhchap_digests": [ 00:20:55.586 "sha256", 00:20:55.586 "sha384", 00:20:55.586 "sha512" 00:20:55.586 ], 00:20:55.586 "dhchap_dhgroups": [ 00:20:55.586 "null", 
00:20:55.586 "ffdhe2048", 00:20:55.586 "ffdhe3072", 00:20:55.586 "ffdhe4096", 00:20:55.586 "ffdhe6144", 00:20:55.586 "ffdhe8192" 00:20:55.586 ], 00:20:55.586 "rdma_umr_per_io": false 00:20:55.586 } 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "method": "bdev_nvme_set_hotplug", 00:20:55.586 "params": { 00:20:55.586 "period_us": 100000, 00:20:55.586 "enable": false 00:20:55.586 } 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "method": "bdev_malloc_create", 00:20:55.586 "params": { 00:20:55.586 "name": "malloc0", 00:20:55.586 "num_blocks": 8192, 00:20:55.586 "block_size": 4096, 00:20:55.586 "physical_block_size": 4096, 00:20:55.586 "uuid": "65760ecf-bb2c-44d2-a37e-9840b720ea36", 00:20:55.586 "optimal_io_boundary": 0, 00:20:55.586 "md_size": 0, 00:20:55.586 "dif_type": 0, 00:20:55.586 "dif_is_head_of_md": false, 00:20:55.586 "dif_pi_format": 0 00:20:55.586 } 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "method": "bdev_wait_for_examine" 00:20:55.586 } 00:20:55.586 ] 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "subsystem": "nbd", 00:20:55.586 "config": [] 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "subsystem": "scheduler", 00:20:55.586 "config": [ 00:20:55.586 { 00:20:55.586 "method": "framework_set_scheduler", 00:20:55.586 "params": { 00:20:55.586 "name": "static" 00:20:55.586 } 00:20:55.586 } 00:20:55.586 ] 00:20:55.586 }, 00:20:55.586 { 00:20:55.586 "subsystem": "nvmf", 00:20:55.586 "config": [ 00:20:55.586 { 00:20:55.586 "method": "nvmf_set_config", 00:20:55.586 "params": { 00:20:55.586 "discovery_filter": "match_any", 00:20:55.586 "admin_cmd_passthru": { 00:20:55.586 "identify_ctrlr": false 00:20:55.586 }, 00:20:55.586 "dhchap_digests": [ 00:20:55.586 "sha256", 00:20:55.587 "sha384", 00:20:55.587 "sha512" 00:20:55.587 ], 00:20:55.587 "dhchap_dhgroups": [ 00:20:55.587 "null", 00:20:55.587 "ffdhe2048", 00:20:55.587 "ffdhe3072", 00:20:55.587 "ffdhe4096", 00:20:55.587 "ffdhe6144", 00:20:55.587 "ffdhe8192" 00:20:55.587 ] 00:20:55.587 } 00:20:55.587 }, 00:20:55.587 { 
00:20:55.587 "method": "nvmf_set_max_subsystems", 00:20:55.587 "params": { 00:20:55.587 "max_subsystems": 1024 00:20:55.587 } 00:20:55.587 }, 00:20:55.587 { 00:20:55.587 "method": "nvmf_set_crdt", 00:20:55.587 "params": { 00:20:55.587 "crdt1": 0, 00:20:55.587 "crdt2": 0, 00:20:55.587 "crdt3": 0 00:20:55.587 } 00:20:55.587 }, 00:20:55.587 { 00:20:55.587 "method": "nvmf_create_transport", 00:20:55.587 "params": { 00:20:55.587 "trtype": "TCP", 00:20:55.587 "max_queue_depth": 128, 00:20:55.587 "max_io_qpairs_per_ctrlr": 127, 00:20:55.587 "in_capsule_data_size": 4096, 00:20:55.587 "max_io_size": 131072, 00:20:55.587 "io_unit_size": 131072, 00:20:55.587 "max_aq_depth": 128, 00:20:55.587 "num_shared_buffers": 511, 00:20:55.587 "buf_cache_size": 4294967295, 00:20:55.587 "dif_insert_or_strip": false, 00:20:55.587 "zcopy": false, 00:20:55.587 "c2h_success": false, 00:20:55.587 "sock_priority": 0, 00:20:55.587 "abort_timeout_sec": 1, 00:20:55.587 "ack_timeout": 0, 00:20:55.587 "data_wr_pool_size": 0 00:20:55.587 } 00:20:55.587 }, 00:20:55.587 { 00:20:55.587 "method": "nvmf_create_subsystem", 00:20:55.587 "params": { 00:20:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.587 "allow_any_host": false, 00:20:55.587 "serial_number": "SPDK00000000000001", 00:20:55.587 "model_number": "SPDK bdev Controller", 00:20:55.587 "max_namespaces": 10, 00:20:55.587 "min_cntlid": 1, 00:20:55.587 "max_cntlid": 65519, 00:20:55.587 "ana_reporting": false 00:20:55.587 } 00:20:55.587 }, 00:20:55.587 { 00:20:55.587 "method": "nvmf_subsystem_add_host", 00:20:55.587 "params": { 00:20:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.587 "host": "nqn.2016-06.io.spdk:host1", 00:20:55.587 "psk": "key0" 00:20:55.587 } 00:20:55.587 }, 00:20:55.587 { 00:20:55.587 "method": "nvmf_subsystem_add_ns", 00:20:55.587 "params": { 00:20:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.587 "namespace": { 00:20:55.587 "nsid": 1, 00:20:55.587 "bdev_name": "malloc0", 00:20:55.587 "nguid": 
"65760ECFBB2C44D2A37E9840B720EA36", 00:20:55.587 "uuid": "65760ecf-bb2c-44d2-a37e-9840b720ea36", 00:20:55.587 "no_auto_visible": false 00:20:55.587 } 00:20:55.587 } 00:20:55.587 }, 00:20:55.587 { 00:20:55.587 "method": "nvmf_subsystem_add_listener", 00:20:55.587 "params": { 00:20:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.587 "listen_address": { 00:20:55.587 "trtype": "TCP", 00:20:55.587 "adrfam": "IPv4", 00:20:55.587 "traddr": "10.0.0.2", 00:20:55.587 "trsvcid": "4420" 00:20:55.587 }, 00:20:55.587 "secure_channel": true 00:20:55.587 } 00:20:55.587 } 00:20:55.587 ] 00:20:55.587 } 00:20:55.587 ] 00:20:55.587 }' 00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1142177 00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1142177 00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1142177 ']' 00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.587 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.846 [2024-12-06 19:19:06.173801] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:55.846 [2024-12-06 19:19:06.173881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.846 [2024-12-06 19:19:06.243730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.846 [2024-12-06 19:19:06.300863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.846 [2024-12-06 19:19:06.300915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.846 [2024-12-06 19:19:06.300946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.846 [2024-12-06 19:19:06.300967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.846 [2024-12-06 19:19:06.300977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.846 [2024-12-06 19:19:06.301638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.106 [2024-12-06 19:19:06.545258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.106 [2024-12-06 19:19:06.577286] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.106 [2024-12-06 19:19:06.577559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.672 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.672 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:56.672 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.672 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.672 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.931 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.931 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1142325 00:20:56.931 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1142325 /var/tmp/bdevperf.sock 00:20:56.931 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1142325 ']' 00:20:56.931 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:56.931 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.931 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:56.931 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:56.931 "subsystems": [ 00:20:56.931 { 00:20:56.931 "subsystem": "keyring", 00:20:56.931 "config": [ 00:20:56.931 { 00:20:56.931 "method": "keyring_file_add_key", 00:20:56.931 "params": { 00:20:56.931 "name": "key0", 00:20:56.931 "path": "/tmp/tmp.zx8QaQ82uL" 00:20:56.931 } 00:20:56.931 } 00:20:56.931 ] 00:20:56.931 }, 00:20:56.931 { 00:20:56.931 "subsystem": "iobuf", 00:20:56.931 "config": [ 00:20:56.931 { 00:20:56.931 "method": "iobuf_set_options", 00:20:56.931 "params": { 00:20:56.931 "small_pool_count": 8192, 00:20:56.931 "large_pool_count": 1024, 00:20:56.931 "small_bufsize": 8192, 00:20:56.931 "large_bufsize": 135168, 00:20:56.931 "enable_numa": false 00:20:56.931 } 00:20:56.931 } 00:20:56.931 ] 00:20:56.931 }, 00:20:56.931 { 00:20:56.931 "subsystem": "sock", 00:20:56.931 "config": [ 00:20:56.931 { 00:20:56.931 "method": "sock_set_default_impl", 00:20:56.931 "params": { 00:20:56.931 "impl_name": "posix" 00:20:56.931 } 00:20:56.931 }, 00:20:56.931 { 00:20:56.931 "method": "sock_impl_set_options", 00:20:56.931 "params": { 00:20:56.931 "impl_name": "ssl", 00:20:56.931 "recv_buf_size": 4096, 00:20:56.931 "send_buf_size": 4096, 00:20:56.931 "enable_recv_pipe": true, 00:20:56.931 "enable_quickack": false, 00:20:56.931 "enable_placement_id": 0, 00:20:56.931 "enable_zerocopy_send_server": true, 00:20:56.931 "enable_zerocopy_send_client": false, 00:20:56.931 "zerocopy_threshold": 0, 00:20:56.931 "tls_version": 0, 00:20:56.931 "enable_ktls": false 00:20:56.931 } 00:20:56.931 }, 00:20:56.931 { 00:20:56.931 "method": "sock_impl_set_options", 00:20:56.931 "params": { 00:20:56.931 "impl_name": "posix", 00:20:56.931 "recv_buf_size": 2097152, 00:20:56.931 "send_buf_size": 2097152, 00:20:56.931 "enable_recv_pipe": true, 00:20:56.931 "enable_quickack": false, 00:20:56.931 "enable_placement_id": 0, 00:20:56.931 "enable_zerocopy_send_server": true, 00:20:56.931 
"enable_zerocopy_send_client": false, 00:20:56.931 "zerocopy_threshold": 0, 00:20:56.931 "tls_version": 0, 00:20:56.931 "enable_ktls": false 00:20:56.931 } 00:20:56.931 } 00:20:56.931 ] 00:20:56.931 }, 00:20:56.931 { 00:20:56.931 "subsystem": "vmd", 00:20:56.931 "config": [] 00:20:56.931 }, 00:20:56.931 { 00:20:56.931 "subsystem": "accel", 00:20:56.931 "config": [ 00:20:56.931 { 00:20:56.931 "method": "accel_set_options", 00:20:56.931 "params": { 00:20:56.931 "small_cache_size": 128, 00:20:56.931 "large_cache_size": 16, 00:20:56.931 "task_count": 2048, 00:20:56.931 "sequence_count": 2048, 00:20:56.931 "buf_count": 2048 00:20:56.931 } 00:20:56.931 } 00:20:56.931 ] 00:20:56.931 }, 00:20:56.931 { 00:20:56.931 "subsystem": "bdev", 00:20:56.931 "config": [ 00:20:56.931 { 00:20:56.931 "method": "bdev_set_options", 00:20:56.931 "params": { 00:20:56.932 "bdev_io_pool_size": 65535, 00:20:56.932 "bdev_io_cache_size": 256, 00:20:56.932 "bdev_auto_examine": true, 00:20:56.932 "iobuf_small_cache_size": 128, 00:20:56.932 "iobuf_large_cache_size": 16 00:20:56.932 } 00:20:56.932 }, 00:20:56.932 { 00:20:56.932 "method": "bdev_raid_set_options", 00:20:56.932 "params": { 00:20:56.932 "process_window_size_kb": 1024, 00:20:56.932 "process_max_bandwidth_mb_sec": 0 00:20:56.932 } 00:20:56.932 }, 00:20:56.932 { 00:20:56.932 "method": "bdev_iscsi_set_options", 00:20:56.932 "params": { 00:20:56.932 "timeout_sec": 30 00:20:56.932 } 00:20:56.932 }, 00:20:56.932 { 00:20:56.932 "method": "bdev_nvme_set_options", 00:20:56.932 "params": { 00:20:56.932 "action_on_timeout": "none", 00:20:56.932 "timeout_us": 0, 00:20:56.932 "timeout_admin_us": 0, 00:20:56.932 "keep_alive_timeout_ms": 10000, 00:20:56.932 "arbitration_burst": 0, 00:20:56.932 "low_priority_weight": 0, 00:20:56.932 "medium_priority_weight": 0, 00:20:56.932 "high_priority_weight": 0, 00:20:56.932 "nvme_adminq_poll_period_us": 10000, 00:20:56.932 "nvme_ioq_poll_period_us": 0, 00:20:56.932 "io_queue_requests": 512, 00:20:56.932 
"delay_cmd_submit": true, 00:20:56.932 "transport_retry_count": 4, 00:20:56.932 "bdev_retry_count": 3, 00:20:56.932 "transport_ack_timeout": 0, 00:20:56.932 "ctrlr_loss_timeout_sec": 0, 00:20:56.932 "reconnect_delay_sec": 0, 00:20:56.932 "fast_io_fail_timeout_sec": 0, 00:20:56.932 "disable_auto_failback": false, 00:20:56.932 "generate_uuids": false, 00:20:56.932 "transport_tos": 0, 00:20:56.932 "nvme_error_stat": false, 00:20:56.932 "rdma_srq_size": 0, 00:20:56.932 "io_path_stat": false, 00:20:56.932 "allow_accel_sequence": false, 00:20:56.932 "rdma_max_cq_size": 0, 00:20:56.932 "rdma_cm_event_timeout_ms": 0, 00:20:56.932 "dhchap_digests": [ 00:20:56.932 "sha256", 00:20:56.932 "sha384", 00:20:56.932 "sha512" 00:20:56.932 ], 00:20:56.932 "dhchap_dhgroups": [ 00:20:56.932 "null", 00:20:56.932 "ffdhe2048", 00:20:56.932 "ffdhe3072", 00:20:56.932 "ffdhe4096", 00:20:56.932 "ffdhe6144", 00:20:56.932 "ffdhe8192" 00:20:56.932 ], 00:20:56.932 "rdma_umr_per_io": false 00:20:56.932 } 00:20:56.932 }, 00:20:56.932 { 00:20:56.932 "method": "bdev_nvme_attach_controller", 00:20:56.932 "params": { 00:20:56.932 "name": "TLSTEST", 00:20:56.932 "trtype": "TCP", 00:20:56.932 "adrfam": "IPv4", 00:20:56.932 "traddr": "10.0.0.2", 00:20:56.932 "trsvcid": "4420", 00:20:56.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.932 "prchk_reftag": false, 00:20:56.932 "prchk_guard": false, 00:20:56.932 "ctrlr_loss_timeout_sec": 0, 00:20:56.932 "reconnect_delay_sec": 0, 00:20:56.932 "fast_io_fail_timeout_sec": 0, 00:20:56.932 "psk": "key0", 00:20:56.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.932 "hdgst": false, 00:20:56.932 "ddgst": false, 00:20:56.932 "multipath": "multipath" 00:20:56.932 } 00:20:56.932 }, 00:20:56.932 { 00:20:56.932 "method": "bdev_nvme_set_hotplug", 00:20:56.932 "params": { 00:20:56.932 "period_us": 100000, 00:20:56.932 "enable": false 00:20:56.932 } 00:20:56.932 }, 00:20:56.932 { 00:20:56.932 "method": "bdev_wait_for_examine" 00:20:56.932 } 00:20:56.932 ] 
00:20:56.932 }, 00:20:56.932 { 00:20:56.932 "subsystem": "nbd", 00:20:56.932 "config": [] 00:20:56.932 } 00:20:56.932 ] 00:20:56.932 }' 00:20:56.932 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.932 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.932 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.932 [2024-12-06 19:19:07.302107] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:56.932 [2024-12-06 19:19:07.302186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142325 ] 00:20:56.932 [2024-12-06 19:19:07.368617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.932 [2024-12-06 19:19:07.429048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.190 [2024-12-06 19:19:07.613859] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.190 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.190 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:57.190 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:57.448 Running I/O for 10 seconds... 
00:20:59.323 3350.00 IOPS, 13.09 MiB/s [2024-12-06T18:19:11.279Z] 3428.50 IOPS, 13.39 MiB/s [2024-12-06T18:19:12.217Z] 3492.00 IOPS, 13.64 MiB/s [2024-12-06T18:19:13.178Z] 3518.25 IOPS, 13.74 MiB/s [2024-12-06T18:19:14.112Z] 3525.60 IOPS, 13.77 MiB/s [2024-12-06T18:19:15.161Z] 3533.00 IOPS, 13.80 MiB/s [2024-12-06T18:19:16.092Z] 3527.00 IOPS, 13.78 MiB/s [2024-12-06T18:19:17.022Z] 3522.75 IOPS, 13.76 MiB/s [2024-12-06T18:19:17.956Z] 3507.56 IOPS, 13.70 MiB/s [2024-12-06T18:19:17.956Z] 3514.30 IOPS, 13.73 MiB/s 00:21:07.379 Latency(us) 00:21:07.379 [2024-12-06T18:19:17.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.379 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:07.379 Verification LBA range: start 0x0 length 0x2000 00:21:07.379 TLSTESTn1 : 10.02 3518.83 13.75 0.00 0.00 36305.00 10388.67 35729.26 00:21:07.379 [2024-12-06T18:19:17.956Z] =================================================================================================================== 00:21:07.379 [2024-12-06T18:19:17.956Z] Total : 3518.83 13.75 0.00 0.00 36305.00 10388.67 35729.26 00:21:07.379 { 00:21:07.379 "results": [ 00:21:07.379 { 00:21:07.379 "job": "TLSTESTn1", 00:21:07.379 "core_mask": "0x4", 00:21:07.379 "workload": "verify", 00:21:07.379 "status": "finished", 00:21:07.379 "verify_range": { 00:21:07.379 "start": 0, 00:21:07.379 "length": 8192 00:21:07.379 }, 00:21:07.379 "queue_depth": 128, 00:21:07.379 "io_size": 4096, 00:21:07.379 "runtime": 10.022921, 00:21:07.379 "iops": 3518.834479489562, 00:21:07.379 "mibps": 13.745447185506102, 00:21:07.379 "io_failed": 0, 00:21:07.379 "io_timeout": 0, 00:21:07.379 "avg_latency_us": 36304.99553068848, 00:21:07.379 "min_latency_us": 10388.66962962963, 00:21:07.379 "max_latency_us": 35729.2562962963 00:21:07.379 } 00:21:07.379 ], 00:21:07.379 "core_count": 1 00:21:07.379 } 00:21:07.379 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:21:07.379 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1142325 00:21:07.379 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1142325 ']' 00:21:07.379 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1142325 00:21:07.379 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:07.379 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.379 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1142325 00:21:07.638 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:07.638 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:07.638 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1142325' 00:21:07.638 killing process with pid 1142325 00:21:07.638 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1142325 00:21:07.638 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.638 00:21:07.638 Latency(us) 00:21:07.638 [2024-12-06T18:19:18.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.638 [2024-12-06T18:19:18.215Z] =================================================================================================================== 00:21:07.638 [2024-12-06T18:19:18.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.638 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1142325 00:21:07.638 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1142177 00:21:07.638 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1142177 ']' 00:21:07.638 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1142177 00:21:07.638 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:07.638 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.638 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1142177 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1142177' 00:21:07.898 killing process with pid 1142177 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1142177 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1142177 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1143651 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1143651 00:21:07.898 
19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1143651 ']' 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.898 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.157 [2024-12-06 19:19:18.514418] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:08.157 [2024-12-06 19:19:18.514509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.157 [2024-12-06 19:19:18.584589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.157 [2024-12-06 19:19:18.635884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.157 [2024-12-06 19:19:18.635947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.157 [2024-12-06 19:19:18.635973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.157 [2024-12-06 19:19:18.635984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:08.157 [2024-12-06 19:19:18.635993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.157 [2024-12-06 19:19:18.636535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.416 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.416 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:08.416 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:08.416 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.416 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.416 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.416 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zx8QaQ82uL 00:21:08.416 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zx8QaQ82uL 00:21:08.416 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:08.674 [2024-12-06 19:19:19.036590] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.674 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:08.932 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:09.190 [2024-12-06 19:19:19.582149] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:09.190 [2024-12-06 19:19:19.582407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.190 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:09.448 malloc0 00:21:09.448 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:09.706 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zx8QaQ82uL 00:21:09.964 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:10.222 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1143941 00:21:10.222 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:10.222 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:10.222 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1143941 /var/tmp/bdevperf.sock 00:21:10.222 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1143941 ']' 00:21:10.222 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.222 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.222 
19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:10.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:10.222 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.222 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.222 [2024-12-06 19:19:20.750656] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:10.222 [2024-12-06 19:19:20.750758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143941 ] 00:21:10.479 [2024-12-06 19:19:20.820002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.479 [2024-12-06 19:19:20.880228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.479 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.479 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:10.479 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zx8QaQ82uL 00:21:10.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:10.995 [2024-12-06 19:19:21.533381] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:21:11.253 nvme0n1 00:21:11.253 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:11.253 Running I/O for 1 seconds... 00:21:12.187 3410.00 IOPS, 13.32 MiB/s 00:21:12.187 Latency(us) 00:21:12.187 [2024-12-06T18:19:22.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.187 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:12.187 Verification LBA range: start 0x0 length 0x2000 00:21:12.187 nvme0n1 : 1.02 3470.61 13.56 0.00 0.00 36526.47 7573.05 33981.63 00:21:12.187 [2024-12-06T18:19:22.764Z] =================================================================================================================== 00:21:12.187 [2024-12-06T18:19:22.764Z] Total : 3470.61 13.56 0.00 0.00 36526.47 7573.05 33981.63 00:21:12.187 { 00:21:12.187 "results": [ 00:21:12.187 { 00:21:12.187 "job": "nvme0n1", 00:21:12.187 "core_mask": "0x2", 00:21:12.187 "workload": "verify", 00:21:12.187 "status": "finished", 00:21:12.187 "verify_range": { 00:21:12.187 "start": 0, 00:21:12.187 "length": 8192 00:21:12.187 }, 00:21:12.187 "queue_depth": 128, 00:21:12.187 "io_size": 4096, 00:21:12.187 "runtime": 1.019706, 00:21:12.187 "iops": 3470.608194911082, 00:21:12.187 "mibps": 13.557063261371415, 00:21:12.187 "io_failed": 0, 00:21:12.187 "io_timeout": 0, 00:21:12.187 "avg_latency_us": 36526.47276213201, 00:21:12.187 "min_latency_us": 7573.0488888888885, 00:21:12.187 "max_latency_us": 33981.62962962963 00:21:12.187 } 00:21:12.187 ], 00:21:12.187 "core_count": 1 00:21:12.187 } 00:21:12.187 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1143941 00:21:12.187 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1143941 ']' 00:21:12.187 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1143941 00:21:12.187 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.446 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.446 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1143941 00:21:12.446 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:12.446 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:12.446 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1143941' 00:21:12.446 killing process with pid 1143941 00:21:12.446 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1143941 00:21:12.446 Received shutdown signal, test time was about 1.000000 seconds 00:21:12.446 00:21:12.446 Latency(us) 00:21:12.446 [2024-12-06T18:19:23.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.446 [2024-12-06T18:19:23.023Z] =================================================================================================================== 00:21:12.446 [2024-12-06T18:19:23.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.446 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1143941 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1143651 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1143651 ']' 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1143651 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1143651 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1143651' 00:21:12.704 killing process with pid 1143651 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1143651 00:21:12.704 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1143651 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1144225 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1144225 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1144225 ']' 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.963 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.963 [2024-12-06 19:19:23.368507] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:12.963 [2024-12-06 19:19:23.368593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.963 [2024-12-06 19:19:23.442719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.963 [2024-12-06 19:19:23.498701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.963 [2024-12-06 19:19:23.498772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.963 [2024-12-06 19:19:23.498802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.963 [2024-12-06 19:19:23.498815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.963 [2024-12-06 19:19:23.498826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:12.963 [2024-12-06 19:19:23.499406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.220 [2024-12-06 19:19:23.640264] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.220 malloc0 00:21:13.220 [2024-12-06 19:19:23.671928] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.220 [2024-12-06 19:19:23.672237] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1144360 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1144360 /var/tmp/bdevperf.sock 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1144360 ']' 00:21:13.220 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.221 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.221 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.221 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.221 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.221 [2024-12-06 19:19:23.743866] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:21:13.221 [2024-12-06 19:19:23.743932] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144360 ] 00:21:13.478 [2024-12-06 19:19:23.808789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.478 [2024-12-06 19:19:23.865281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.478 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.478 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:13.478 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zx8QaQ82uL 00:21:13.735 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:13.993 [2024-12-06 19:19:24.558623] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.251 nvme0n1 00:21:14.251 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.251 Running I/O for 1 seconds... 
00:21:15.632 3259.00 IOPS, 12.73 MiB/s 00:21:15.632 Latency(us) 00:21:15.632 [2024-12-06T18:19:26.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.632 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:15.632 Verification LBA range: start 0x0 length 0x2000 00:21:15.632 nvme0n1 : 1.02 3323.12 12.98 0.00 0.00 38168.33 7330.32 51652.08 00:21:15.632 [2024-12-06T18:19:26.209Z] =================================================================================================================== 00:21:15.632 [2024-12-06T18:19:26.209Z] Total : 3323.12 12.98 0.00 0.00 38168.33 7330.32 51652.08 00:21:15.632 { 00:21:15.632 "results": [ 00:21:15.632 { 00:21:15.632 "job": "nvme0n1", 00:21:15.632 "core_mask": "0x2", 00:21:15.632 "workload": "verify", 00:21:15.632 "status": "finished", 00:21:15.632 "verify_range": { 00:21:15.632 "start": 0, 00:21:15.632 "length": 8192 00:21:15.632 }, 00:21:15.632 "queue_depth": 128, 00:21:15.632 "io_size": 4096, 00:21:15.632 "runtime": 1.019222, 00:21:15.632 "iops": 3323.1229310199346, 00:21:15.632 "mibps": 12.98094894929662, 00:21:15.632 "io_failed": 0, 00:21:15.632 "io_timeout": 0, 00:21:15.632 "avg_latency_us": 38168.33294710713, 00:21:15.632 "min_latency_us": 7330.322962962963, 00:21:15.632 "max_latency_us": 51652.07703703704 00:21:15.632 } 00:21:15.632 ], 00:21:15.632 "core_count": 1 00:21:15.632 } 00:21:15.632 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:15.632 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.632 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.632 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.632 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:15.632 "subsystems": [ 00:21:15.632 { 00:21:15.632 "subsystem": 
"keyring", 00:21:15.632 "config": [ 00:21:15.632 { 00:21:15.632 "method": "keyring_file_add_key", 00:21:15.632 "params": { 00:21:15.632 "name": "key0", 00:21:15.632 "path": "/tmp/tmp.zx8QaQ82uL" 00:21:15.632 } 00:21:15.632 } 00:21:15.632 ] 00:21:15.632 }, 00:21:15.632 { 00:21:15.632 "subsystem": "iobuf", 00:21:15.632 "config": [ 00:21:15.632 { 00:21:15.632 "method": "iobuf_set_options", 00:21:15.632 "params": { 00:21:15.632 "small_pool_count": 8192, 00:21:15.632 "large_pool_count": 1024, 00:21:15.632 "small_bufsize": 8192, 00:21:15.632 "large_bufsize": 135168, 00:21:15.632 "enable_numa": false 00:21:15.632 } 00:21:15.632 } 00:21:15.632 ] 00:21:15.632 }, 00:21:15.632 { 00:21:15.632 "subsystem": "sock", 00:21:15.632 "config": [ 00:21:15.632 { 00:21:15.632 "method": "sock_set_default_impl", 00:21:15.632 "params": { 00:21:15.632 "impl_name": "posix" 00:21:15.632 } 00:21:15.632 }, 00:21:15.632 { 00:21:15.632 "method": "sock_impl_set_options", 00:21:15.632 "params": { 00:21:15.632 "impl_name": "ssl", 00:21:15.632 "recv_buf_size": 4096, 00:21:15.632 "send_buf_size": 4096, 00:21:15.632 "enable_recv_pipe": true, 00:21:15.632 "enable_quickack": false, 00:21:15.632 "enable_placement_id": 0, 00:21:15.632 "enable_zerocopy_send_server": true, 00:21:15.632 "enable_zerocopy_send_client": false, 00:21:15.632 "zerocopy_threshold": 0, 00:21:15.632 "tls_version": 0, 00:21:15.632 "enable_ktls": false 00:21:15.632 } 00:21:15.632 }, 00:21:15.632 { 00:21:15.632 "method": "sock_impl_set_options", 00:21:15.632 "params": { 00:21:15.632 "impl_name": "posix", 00:21:15.632 "recv_buf_size": 2097152, 00:21:15.632 "send_buf_size": 2097152, 00:21:15.632 "enable_recv_pipe": true, 00:21:15.632 "enable_quickack": false, 00:21:15.632 "enable_placement_id": 0, 00:21:15.632 "enable_zerocopy_send_server": true, 00:21:15.632 "enable_zerocopy_send_client": false, 00:21:15.632 "zerocopy_threshold": 0, 00:21:15.632 "tls_version": 0, 00:21:15.632 "enable_ktls": false 00:21:15.632 } 00:21:15.632 } 00:21:15.632 
] 00:21:15.632 }, 00:21:15.632 { 00:21:15.632 "subsystem": "vmd", 00:21:15.632 "config": [] 00:21:15.632 }, 00:21:15.632 { 00:21:15.632 "subsystem": "accel", 00:21:15.632 "config": [ 00:21:15.632 { 00:21:15.632 "method": "accel_set_options", 00:21:15.632 "params": { 00:21:15.632 "small_cache_size": 128, 00:21:15.632 "large_cache_size": 16, 00:21:15.632 "task_count": 2048, 00:21:15.632 "sequence_count": 2048, 00:21:15.632 "buf_count": 2048 00:21:15.632 } 00:21:15.632 } 00:21:15.632 ] 00:21:15.632 }, 00:21:15.632 { 00:21:15.632 "subsystem": "bdev", 00:21:15.632 "config": [ 00:21:15.632 { 00:21:15.632 "method": "bdev_set_options", 00:21:15.632 "params": { 00:21:15.632 "bdev_io_pool_size": 65535, 00:21:15.632 "bdev_io_cache_size": 256, 00:21:15.632 "bdev_auto_examine": true, 00:21:15.632 "iobuf_small_cache_size": 128, 00:21:15.632 "iobuf_large_cache_size": 16 00:21:15.632 } 00:21:15.632 }, 00:21:15.632 { 00:21:15.632 "method": "bdev_raid_set_options", 00:21:15.632 "params": { 00:21:15.632 "process_window_size_kb": 1024, 00:21:15.632 "process_max_bandwidth_mb_sec": 0 00:21:15.632 } 00:21:15.632 }, 00:21:15.632 { 00:21:15.632 "method": "bdev_iscsi_set_options", 00:21:15.632 "params": { 00:21:15.632 "timeout_sec": 30 00:21:15.632 } 00:21:15.632 }, 00:21:15.632 { 00:21:15.633 "method": "bdev_nvme_set_options", 00:21:15.633 "params": { 00:21:15.633 "action_on_timeout": "none", 00:21:15.633 "timeout_us": 0, 00:21:15.633 "timeout_admin_us": 0, 00:21:15.633 "keep_alive_timeout_ms": 10000, 00:21:15.633 "arbitration_burst": 0, 00:21:15.633 "low_priority_weight": 0, 00:21:15.633 "medium_priority_weight": 0, 00:21:15.633 "high_priority_weight": 0, 00:21:15.633 "nvme_adminq_poll_period_us": 10000, 00:21:15.633 "nvme_ioq_poll_period_us": 0, 00:21:15.633 "io_queue_requests": 0, 00:21:15.633 "delay_cmd_submit": true, 00:21:15.633 "transport_retry_count": 4, 00:21:15.633 "bdev_retry_count": 3, 00:21:15.633 "transport_ack_timeout": 0, 00:21:15.633 "ctrlr_loss_timeout_sec": 0, 
00:21:15.633 "reconnect_delay_sec": 0, 00:21:15.633 "fast_io_fail_timeout_sec": 0, 00:21:15.633 "disable_auto_failback": false, 00:21:15.633 "generate_uuids": false, 00:21:15.633 "transport_tos": 0, 00:21:15.633 "nvme_error_stat": false, 00:21:15.633 "rdma_srq_size": 0, 00:21:15.633 "io_path_stat": false, 00:21:15.633 "allow_accel_sequence": false, 00:21:15.633 "rdma_max_cq_size": 0, 00:21:15.633 "rdma_cm_event_timeout_ms": 0, 00:21:15.633 "dhchap_digests": [ 00:21:15.633 "sha256", 00:21:15.633 "sha384", 00:21:15.633 "sha512" 00:21:15.633 ], 00:21:15.633 "dhchap_dhgroups": [ 00:21:15.633 "null", 00:21:15.633 "ffdhe2048", 00:21:15.633 "ffdhe3072", 00:21:15.633 "ffdhe4096", 00:21:15.633 "ffdhe6144", 00:21:15.633 "ffdhe8192" 00:21:15.633 ], 00:21:15.633 "rdma_umr_per_io": false 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "bdev_nvme_set_hotplug", 00:21:15.633 "params": { 00:21:15.633 "period_us": 100000, 00:21:15.633 "enable": false 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "bdev_malloc_create", 00:21:15.633 "params": { 00:21:15.633 "name": "malloc0", 00:21:15.633 "num_blocks": 8192, 00:21:15.633 "block_size": 4096, 00:21:15.633 "physical_block_size": 4096, 00:21:15.633 "uuid": "bf08f407-0881-4bd2-89d0-2e394a20ed20", 00:21:15.633 "optimal_io_boundary": 0, 00:21:15.633 "md_size": 0, 00:21:15.633 "dif_type": 0, 00:21:15.633 "dif_is_head_of_md": false, 00:21:15.633 "dif_pi_format": 0 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "bdev_wait_for_examine" 00:21:15.633 } 00:21:15.633 ] 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "subsystem": "nbd", 00:21:15.633 "config": [] 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "subsystem": "scheduler", 00:21:15.633 "config": [ 00:21:15.633 { 00:21:15.633 "method": "framework_set_scheduler", 00:21:15.633 "params": { 00:21:15.633 "name": "static" 00:21:15.633 } 00:21:15.633 } 00:21:15.633 ] 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "subsystem": "nvmf", 
00:21:15.633 "config": [ 00:21:15.633 { 00:21:15.633 "method": "nvmf_set_config", 00:21:15.633 "params": { 00:21:15.633 "discovery_filter": "match_any", 00:21:15.633 "admin_cmd_passthru": { 00:21:15.633 "identify_ctrlr": false 00:21:15.633 }, 00:21:15.633 "dhchap_digests": [ 00:21:15.633 "sha256", 00:21:15.633 "sha384", 00:21:15.633 "sha512" 00:21:15.633 ], 00:21:15.633 "dhchap_dhgroups": [ 00:21:15.633 "null", 00:21:15.633 "ffdhe2048", 00:21:15.633 "ffdhe3072", 00:21:15.633 "ffdhe4096", 00:21:15.633 "ffdhe6144", 00:21:15.633 "ffdhe8192" 00:21:15.633 ] 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "nvmf_set_max_subsystems", 00:21:15.633 "params": { 00:21:15.633 "max_subsystems": 1024 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "nvmf_set_crdt", 00:21:15.633 "params": { 00:21:15.633 "crdt1": 0, 00:21:15.633 "crdt2": 0, 00:21:15.633 "crdt3": 0 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "nvmf_create_transport", 00:21:15.633 "params": { 00:21:15.633 "trtype": "TCP", 00:21:15.633 "max_queue_depth": 128, 00:21:15.633 "max_io_qpairs_per_ctrlr": 127, 00:21:15.633 "in_capsule_data_size": 4096, 00:21:15.633 "max_io_size": 131072, 00:21:15.633 "io_unit_size": 131072, 00:21:15.633 "max_aq_depth": 128, 00:21:15.633 "num_shared_buffers": 511, 00:21:15.633 "buf_cache_size": 4294967295, 00:21:15.633 "dif_insert_or_strip": false, 00:21:15.633 "zcopy": false, 00:21:15.633 "c2h_success": false, 00:21:15.633 "sock_priority": 0, 00:21:15.633 "abort_timeout_sec": 1, 00:21:15.633 "ack_timeout": 0, 00:21:15.633 "data_wr_pool_size": 0 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "nvmf_create_subsystem", 00:21:15.633 "params": { 00:21:15.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.633 "allow_any_host": false, 00:21:15.633 "serial_number": "00000000000000000000", 00:21:15.633 "model_number": "SPDK bdev Controller", 00:21:15.633 "max_namespaces": 32, 00:21:15.633 "min_cntlid": 1, 
00:21:15.633 "max_cntlid": 65519, 00:21:15.633 "ana_reporting": false 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "nvmf_subsystem_add_host", 00:21:15.633 "params": { 00:21:15.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.633 "host": "nqn.2016-06.io.spdk:host1", 00:21:15.633 "psk": "key0" 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "nvmf_subsystem_add_ns", 00:21:15.633 "params": { 00:21:15.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.633 "namespace": { 00:21:15.633 "nsid": 1, 00:21:15.633 "bdev_name": "malloc0", 00:21:15.633 "nguid": "BF08F40708814BD289D02E394A20ED20", 00:21:15.633 "uuid": "bf08f407-0881-4bd2-89d0-2e394a20ed20", 00:21:15.633 "no_auto_visible": false 00:21:15.633 } 00:21:15.633 } 00:21:15.633 }, 00:21:15.633 { 00:21:15.633 "method": "nvmf_subsystem_add_listener", 00:21:15.633 "params": { 00:21:15.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.633 "listen_address": { 00:21:15.633 "trtype": "TCP", 00:21:15.633 "adrfam": "IPv4", 00:21:15.633 "traddr": "10.0.0.2", 00:21:15.633 "trsvcid": "4420" 00:21:15.633 }, 00:21:15.633 "secure_channel": false, 00:21:15.633 "sock_impl": "ssl" 00:21:15.633 } 00:21:15.633 } 00:21:15.633 ] 00:21:15.633 } 00:21:15.633 ] 00:21:15.633 }' 00:21:15.633 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:15.890 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:15.890 "subsystems": [ 00:21:15.890 { 00:21:15.890 "subsystem": "keyring", 00:21:15.890 "config": [ 00:21:15.890 { 00:21:15.890 "method": "keyring_file_add_key", 00:21:15.890 "params": { 00:21:15.890 "name": "key0", 00:21:15.890 "path": "/tmp/tmp.zx8QaQ82uL" 00:21:15.890 } 00:21:15.890 } 00:21:15.890 ] 00:21:15.890 }, 00:21:15.890 { 00:21:15.890 "subsystem": "iobuf", 00:21:15.890 "config": [ 00:21:15.890 { 00:21:15.890 "method": 
"iobuf_set_options", 00:21:15.890 "params": { 00:21:15.890 "small_pool_count": 8192, 00:21:15.890 "large_pool_count": 1024, 00:21:15.890 "small_bufsize": 8192, 00:21:15.890 "large_bufsize": 135168, 00:21:15.890 "enable_numa": false 00:21:15.890 } 00:21:15.890 } 00:21:15.890 ] 00:21:15.890 }, 00:21:15.890 { 00:21:15.890 "subsystem": "sock", 00:21:15.890 "config": [ 00:21:15.890 { 00:21:15.890 "method": "sock_set_default_impl", 00:21:15.890 "params": { 00:21:15.890 "impl_name": "posix" 00:21:15.890 } 00:21:15.890 }, 00:21:15.890 { 00:21:15.890 "method": "sock_impl_set_options", 00:21:15.890 "params": { 00:21:15.890 "impl_name": "ssl", 00:21:15.890 "recv_buf_size": 4096, 00:21:15.890 "send_buf_size": 4096, 00:21:15.890 "enable_recv_pipe": true, 00:21:15.890 "enable_quickack": false, 00:21:15.890 "enable_placement_id": 0, 00:21:15.890 "enable_zerocopy_send_server": true, 00:21:15.890 "enable_zerocopy_send_client": false, 00:21:15.890 "zerocopy_threshold": 0, 00:21:15.890 "tls_version": 0, 00:21:15.890 "enable_ktls": false 00:21:15.890 } 00:21:15.890 }, 00:21:15.890 { 00:21:15.890 "method": "sock_impl_set_options", 00:21:15.890 "params": { 00:21:15.890 "impl_name": "posix", 00:21:15.890 "recv_buf_size": 2097152, 00:21:15.890 "send_buf_size": 2097152, 00:21:15.890 "enable_recv_pipe": true, 00:21:15.890 "enable_quickack": false, 00:21:15.890 "enable_placement_id": 0, 00:21:15.890 "enable_zerocopy_send_server": true, 00:21:15.890 "enable_zerocopy_send_client": false, 00:21:15.890 "zerocopy_threshold": 0, 00:21:15.890 "tls_version": 0, 00:21:15.890 "enable_ktls": false 00:21:15.890 } 00:21:15.890 } 00:21:15.890 ] 00:21:15.890 }, 00:21:15.890 { 00:21:15.890 "subsystem": "vmd", 00:21:15.890 "config": [] 00:21:15.890 }, 00:21:15.890 { 00:21:15.890 "subsystem": "accel", 00:21:15.890 "config": [ 00:21:15.890 { 00:21:15.890 "method": "accel_set_options", 00:21:15.890 "params": { 00:21:15.890 "small_cache_size": 128, 00:21:15.890 "large_cache_size": 16, 00:21:15.890 "task_count": 
2048, 00:21:15.890 "sequence_count": 2048, 00:21:15.890 "buf_count": 2048 00:21:15.890 } 00:21:15.890 } 00:21:15.890 ] 00:21:15.890 }, 00:21:15.890 { 00:21:15.890 "subsystem": "bdev", 00:21:15.890 "config": [ 00:21:15.890 { 00:21:15.890 "method": "bdev_set_options", 00:21:15.891 "params": { 00:21:15.891 "bdev_io_pool_size": 65535, 00:21:15.891 "bdev_io_cache_size": 256, 00:21:15.891 "bdev_auto_examine": true, 00:21:15.891 "iobuf_small_cache_size": 128, 00:21:15.891 "iobuf_large_cache_size": 16 00:21:15.891 } 00:21:15.891 }, 00:21:15.891 { 00:21:15.891 "method": "bdev_raid_set_options", 00:21:15.891 "params": { 00:21:15.891 "process_window_size_kb": 1024, 00:21:15.891 "process_max_bandwidth_mb_sec": 0 00:21:15.891 } 00:21:15.891 }, 00:21:15.891 { 00:21:15.891 "method": "bdev_iscsi_set_options", 00:21:15.891 "params": { 00:21:15.891 "timeout_sec": 30 00:21:15.891 } 00:21:15.891 }, 00:21:15.891 { 00:21:15.891 "method": "bdev_nvme_set_options", 00:21:15.891 "params": { 00:21:15.891 "action_on_timeout": "none", 00:21:15.891 "timeout_us": 0, 00:21:15.891 "timeout_admin_us": 0, 00:21:15.891 "keep_alive_timeout_ms": 10000, 00:21:15.891 "arbitration_burst": 0, 00:21:15.891 "low_priority_weight": 0, 00:21:15.891 "medium_priority_weight": 0, 00:21:15.891 "high_priority_weight": 0, 00:21:15.891 "nvme_adminq_poll_period_us": 10000, 00:21:15.891 "nvme_ioq_poll_period_us": 0, 00:21:15.891 "io_queue_requests": 512, 00:21:15.891 "delay_cmd_submit": true, 00:21:15.891 "transport_retry_count": 4, 00:21:15.891 "bdev_retry_count": 3, 00:21:15.891 "transport_ack_timeout": 0, 00:21:15.891 "ctrlr_loss_timeout_sec": 0, 00:21:15.891 "reconnect_delay_sec": 0, 00:21:15.891 "fast_io_fail_timeout_sec": 0, 00:21:15.891 "disable_auto_failback": false, 00:21:15.891 "generate_uuids": false, 00:21:15.891 "transport_tos": 0, 00:21:15.891 "nvme_error_stat": false, 00:21:15.891 "rdma_srq_size": 0, 00:21:15.891 "io_path_stat": false, 00:21:15.891 "allow_accel_sequence": false, 00:21:15.891 
"rdma_max_cq_size": 0, 00:21:15.891 "rdma_cm_event_timeout_ms": 0, 00:21:15.891 "dhchap_digests": [ 00:21:15.891 "sha256", 00:21:15.891 "sha384", 00:21:15.891 "sha512" 00:21:15.891 ], 00:21:15.891 "dhchap_dhgroups": [ 00:21:15.891 "null", 00:21:15.891 "ffdhe2048", 00:21:15.891 "ffdhe3072", 00:21:15.891 "ffdhe4096", 00:21:15.891 "ffdhe6144", 00:21:15.891 "ffdhe8192" 00:21:15.891 ], 00:21:15.891 "rdma_umr_per_io": false 00:21:15.891 } 00:21:15.891 }, 00:21:15.891 { 00:21:15.891 "method": "bdev_nvme_attach_controller", 00:21:15.891 "params": { 00:21:15.891 "name": "nvme0", 00:21:15.891 "trtype": "TCP", 00:21:15.891 "adrfam": "IPv4", 00:21:15.891 "traddr": "10.0.0.2", 00:21:15.891 "trsvcid": "4420", 00:21:15.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.891 "prchk_reftag": false, 00:21:15.891 "prchk_guard": false, 00:21:15.891 "ctrlr_loss_timeout_sec": 0, 00:21:15.891 "reconnect_delay_sec": 0, 00:21:15.891 "fast_io_fail_timeout_sec": 0, 00:21:15.891 "psk": "key0", 00:21:15.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.891 "hdgst": false, 00:21:15.891 "ddgst": false, 00:21:15.891 "multipath": "multipath" 00:21:15.891 } 00:21:15.891 }, 00:21:15.891 { 00:21:15.891 "method": "bdev_nvme_set_hotplug", 00:21:15.891 "params": { 00:21:15.891 "period_us": 100000, 00:21:15.891 "enable": false 00:21:15.891 } 00:21:15.891 }, 00:21:15.891 { 00:21:15.891 "method": "bdev_enable_histogram", 00:21:15.891 "params": { 00:21:15.891 "name": "nvme0n1", 00:21:15.891 "enable": true 00:21:15.891 } 00:21:15.891 }, 00:21:15.891 { 00:21:15.891 "method": "bdev_wait_for_examine" 00:21:15.891 } 00:21:15.891 ] 00:21:15.891 }, 00:21:15.891 { 00:21:15.891 "subsystem": "nbd", 00:21:15.891 "config": [] 00:21:15.891 } 00:21:15.891 ] 00:21:15.891 }' 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1144360 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1144360 ']' 00:21:15.891 19:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1144360 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1144360 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1144360' 00:21:15.891 killing process with pid 1144360 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1144360 00:21:15.891 Received shutdown signal, test time was about 1.000000 seconds 00:21:15.891 00:21:15.891 Latency(us) 00:21:15.891 [2024-12-06T18:19:26.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.891 [2024-12-06T18:19:26.468Z] =================================================================================================================== 00:21:15.891 [2024-12-06T18:19:26.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.891 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1144360 00:21:16.148 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1144225 00:21:16.148 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1144225 ']' 00:21:16.148 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1144225 00:21:16.148 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.148 19:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.149 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1144225 00:21:16.149 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.149 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.149 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1144225' 00:21:16.149 killing process with pid 1144225 00:21:16.149 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1144225 00:21:16.149 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1144225 00:21:16.407 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:16.407 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.407 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:16.407 "subsystems": [ 00:21:16.407 { 00:21:16.407 "subsystem": "keyring", 00:21:16.407 "config": [ 00:21:16.407 { 00:21:16.407 "method": "keyring_file_add_key", 00:21:16.407 "params": { 00:21:16.407 "name": "key0", 00:21:16.407 "path": "/tmp/tmp.zx8QaQ82uL" 00:21:16.407 } 00:21:16.407 } 00:21:16.407 ] 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "subsystem": "iobuf", 00:21:16.407 "config": [ 00:21:16.407 { 00:21:16.407 "method": "iobuf_set_options", 00:21:16.407 "params": { 00:21:16.407 "small_pool_count": 8192, 00:21:16.407 "large_pool_count": 1024, 00:21:16.407 "small_bufsize": 8192, 00:21:16.407 "large_bufsize": 135168, 00:21:16.407 "enable_numa": false 00:21:16.407 } 00:21:16.407 } 00:21:16.407 ] 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "subsystem": "sock", 00:21:16.407 "config": [ 00:21:16.407 
{ 00:21:16.407 "method": "sock_set_default_impl", 00:21:16.407 "params": { 00:21:16.407 "impl_name": "posix" 00:21:16.407 } 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "method": "sock_impl_set_options", 00:21:16.407 "params": { 00:21:16.407 "impl_name": "ssl", 00:21:16.407 "recv_buf_size": 4096, 00:21:16.407 "send_buf_size": 4096, 00:21:16.407 "enable_recv_pipe": true, 00:21:16.407 "enable_quickack": false, 00:21:16.407 "enable_placement_id": 0, 00:21:16.407 "enable_zerocopy_send_server": true, 00:21:16.407 "enable_zerocopy_send_client": false, 00:21:16.407 "zerocopy_threshold": 0, 00:21:16.407 "tls_version": 0, 00:21:16.407 "enable_ktls": false 00:21:16.407 } 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "method": "sock_impl_set_options", 00:21:16.407 "params": { 00:21:16.407 "impl_name": "posix", 00:21:16.407 "recv_buf_size": 2097152, 00:21:16.407 "send_buf_size": 2097152, 00:21:16.407 "enable_recv_pipe": true, 00:21:16.407 "enable_quickack": false, 00:21:16.407 "enable_placement_id": 0, 00:21:16.407 "enable_zerocopy_send_server": true, 00:21:16.407 "enable_zerocopy_send_client": false, 00:21:16.407 "zerocopy_threshold": 0, 00:21:16.407 "tls_version": 0, 00:21:16.407 "enable_ktls": false 00:21:16.407 } 00:21:16.407 } 00:21:16.407 ] 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "subsystem": "vmd", 00:21:16.407 "config": [] 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "subsystem": "accel", 00:21:16.407 "config": [ 00:21:16.407 { 00:21:16.407 "method": "accel_set_options", 00:21:16.407 "params": { 00:21:16.407 "small_cache_size": 128, 00:21:16.407 "large_cache_size": 16, 00:21:16.407 "task_count": 2048, 00:21:16.407 "sequence_count": 2048, 00:21:16.407 "buf_count": 2048 00:21:16.407 } 00:21:16.407 } 00:21:16.407 ] 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "subsystem": "bdev", 00:21:16.407 "config": [ 00:21:16.407 { 00:21:16.407 "method": "bdev_set_options", 00:21:16.407 "params": { 00:21:16.407 "bdev_io_pool_size": 65535, 00:21:16.407 "bdev_io_cache_size": 256, 
00:21:16.407 "bdev_auto_examine": true, 00:21:16.407 "iobuf_small_cache_size": 128, 00:21:16.407 "iobuf_large_cache_size": 16 00:21:16.407 } 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "method": "bdev_raid_set_options", 00:21:16.407 "params": { 00:21:16.407 "process_window_size_kb": 1024, 00:21:16.407 "process_max_bandwidth_mb_sec": 0 00:21:16.407 } 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "method": "bdev_iscsi_set_options", 00:21:16.407 "params": { 00:21:16.407 "timeout_sec": 30 00:21:16.407 } 00:21:16.407 }, 00:21:16.407 { 00:21:16.407 "method": "bdev_nvme_set_options", 00:21:16.407 "params": { 00:21:16.407 "action_on_timeout": "none", 00:21:16.407 "timeout_us": 0, 00:21:16.407 "timeout_admin_us": 0, 00:21:16.407 "keep_alive_timeout_ms": 10000, 00:21:16.407 "arbitration_burst": 0, 00:21:16.407 "low_priority_weight": 0, 00:21:16.407 "medium_priority_weight": 0, 00:21:16.407 "high_priority_weight": 0, 00:21:16.408 "nvme_adminq_poll_period_us": 10000, 00:21:16.408 "nvme_ioq_poll_period_us": 0, 00:21:16.408 "io_queue_requests": 0, 00:21:16.408 "delay_cmd_submit": true, 00:21:16.408 "transport_retry_count": 4, 00:21:16.408 "bdev_retry_count": 3, 00:21:16.408 "transport_ack_timeout": 0, 00:21:16.408 "ctrlr_loss_timeout_sec": 0, 00:21:16.408 "reconnect_delay_sec": 0, 00:21:16.408 "fast_io_fail_timeout_sec": 0, 00:21:16.408 "disable_auto_failback": false, 00:21:16.408 "generate_uuids": false, 00:21:16.408 "transport_tos": 0, 00:21:16.408 "nvme_error_stat": false, 00:21:16.408 "rdma_srq_size": 0, 00:21:16.408 "io_path_stat": false, 00:21:16.408 "allow_accel_sequence": false, 00:21:16.408 "rdma_max_cq_size": 0, 00:21:16.408 "rdma_cm_event_timeout_ms": 0, 00:21:16.408 "dhchap_digests": [ 00:21:16.408 "sha256", 00:21:16.408 "sha384", 00:21:16.408 "sha512" 00:21:16.408 ], 00:21:16.408 "dhchap_dhgroups": [ 00:21:16.408 "null", 00:21:16.408 "ffdhe2048", 00:21:16.408 "ffdhe3072", 00:21:16.408 "ffdhe4096", 00:21:16.408 "ffdhe6144", 00:21:16.408 "ffdhe8192" 00:21:16.408 ], 
00:21:16.408 "rdma_umr_per_io": false 00:21:16.408 } 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "method": "bdev_nvme_set_hotplug", 00:21:16.408 "params": { 00:21:16.408 "period_us": 100000, 00:21:16.408 "enable": false 00:21:16.408 } 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "method": "bdev_malloc_create", 00:21:16.408 "params": { 00:21:16.408 "name": "malloc0", 00:21:16.408 "num_blocks": 8192, 00:21:16.408 "block_size": 4096, 00:21:16.408 "physical_block_size": 4096, 00:21:16.408 "uuid": "bf08f407-0881-4bd2-89d0-2e394a20ed20", 00:21:16.408 "optimal_io_boundary": 0, 00:21:16.408 "md_size": 0, 00:21:16.408 "dif_type": 0, 00:21:16.408 "dif_is_head_of_md": false, 00:21:16.408 "dif_pi_format": 0 00:21:16.408 } 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "method": "bdev_wait_for_examine" 00:21:16.408 } 00:21:16.408 ] 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "subsystem": "nbd", 00:21:16.408 "config": [] 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "subsystem": "scheduler", 00:21:16.408 "config": [ 00:21:16.408 { 00:21:16.408 "method": "framework_set_scheduler", 00:21:16.408 "params": { 00:21:16.408 "name": "static" 00:21:16.408 } 00:21:16.408 } 00:21:16.408 ] 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "subsystem": "nvmf", 00:21:16.408 "config": [ 00:21:16.408 { 00:21:16.408 "method": "nvmf_set_config", 00:21:16.408 "params": { 00:21:16.408 "discovery_filter": "match_any", 00:21:16.408 "admin_cmd_passthru": { 00:21:16.408 "identify_ctrlr": false 00:21:16.408 }, 00:21:16.408 "dhchap_digests": [ 00:21:16.408 "sha256", 00:21:16.408 "sha384", 00:21:16.408 "sha512" 00:21:16.408 ], 00:21:16.408 "dhchap_dhgroups": [ 00:21:16.408 "null", 00:21:16.408 "ffdhe2048", 00:21:16.408 "ffdhe3072", 00:21:16.408 "ffdhe4096", 00:21:16.408 "ffdhe6144", 00:21:16.408 "ffdhe8192" 00:21:16.408 ] 00:21:16.408 } 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "method": "nvmf_set_max_subsystems", 00:21:16.408 "params": { 00:21:16.408 "max_subsystems": 1024 00:21:16.408 } 00:21:16.408 }, 
00:21:16.408 { 00:21:16.408 "method": "nvmf_set_crdt", 00:21:16.408 "params": { 00:21:16.408 "crdt1": 0, 00:21:16.408 "crdt2": 0, 00:21:16.408 "crdt3": 0 00:21:16.408 } 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "method": "nvmf_create_transport", 00:21:16.408 "params": { 00:21:16.408 "trtype": "TCP", 00:21:16.408 "max_queue_depth": 128, 00:21:16.408 "max_io_qpairs_per_ctrlr": 127, 00:21:16.408 "in_capsule_data_size": 4096, 00:21:16.408 "max_io_size": 131072, 00:21:16.408 "io_unit_size": 131072, 00:21:16.408 "max_aq_depth": 128, 00:21:16.408 "num_shared_buffers": 511, 00:21:16.408 "buf_cache_size": 4294967295, 00:21:16.408 "dif_insert_or_strip": false, 00:21:16.408 "zcopy": false, 00:21:16.408 "c2h_success": false, 00:21:16.408 "sock_priority": 0, 00:21:16.408 "abort_timeout_sec": 1, 00:21:16.408 "ack_timeout": 0, 00:21:16.408 "data_wr_pool_size": 0 00:21:16.408 } 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "method": "nvmf_create_subsystem", 00:21:16.408 "params": { 00:21:16.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.408 "allow_any_host": false, 00:21:16.408 "serial_number": "00000000000000000000", 00:21:16.408 "model_number": "SPDK bdev Controller", 00:21:16.408 "max_namespaces": 32, 00:21:16.408 "min_cntlid": 1, 00:21:16.408 "max_cntlid": 65519, 00:21:16.408 "ana_reporting": false 00:21:16.408 } 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "method": "nvmf_subsystem_add_host", 00:21:16.408 "params": { 00:21:16.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.408 "host": "nqn.2016-06.io.spdk:host1", 00:21:16.408 "psk": "key0" 00:21:16.408 } 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "method": "nvmf_subsystem_add_ns", 00:21:16.408 "params": { 00:21:16.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.408 "namespace": { 00:21:16.408 "nsid": 1, 00:21:16.408 "bdev_name": "malloc0", 00:21:16.408 "nguid": "BF08F40708814BD289D02E394A20ED20", 00:21:16.408 "uuid": "bf08f407-0881-4bd2-89d0-2e394a20ed20", 00:21:16.408 "no_auto_visible": false 00:21:16.408 } 
00:21:16.408 } 00:21:16.408 }, 00:21:16.408 { 00:21:16.408 "method": "nvmf_subsystem_add_listener", 00:21:16.408 "params": { 00:21:16.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.408 "listen_address": { 00:21:16.408 "trtype": "TCP", 00:21:16.408 "adrfam": "IPv4", 00:21:16.408 "traddr": "10.0.0.2", 00:21:16.408 "trsvcid": "4420" 00:21:16.408 }, 00:21:16.408 "secure_channel": false, 00:21:16.408 "sock_impl": "ssl" 00:21:16.408 } 00:21:16.408 } 00:21:16.408 ] 00:21:16.408 } 00:21:16.408 ] 00:21:16.408 }' 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1144725 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1144725 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1144725 ']' 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.408 [2024-12-06 19:19:26.885744] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:16.408 [2024-12-06 19:19:26.885829] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.408 [2024-12-06 19:19:26.955710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.667 [2024-12-06 19:19:27.008892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.667 [2024-12-06 19:19:27.008945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.667 [2024-12-06 19:19:27.008973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.667 [2024-12-06 19:19:27.008984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.667 [2024-12-06 19:19:27.008993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.667 [2024-12-06 19:19:27.009590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.925 [2024-12-06 19:19:27.252432] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.925 [2024-12-06 19:19:27.284470] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.925 [2024-12-06 19:19:27.284733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1144806 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1144806 /var/tmp/bdevperf.sock 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1144806 ']' 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.493 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:17.493 "subsystems": [ 00:21:17.493 { 00:21:17.493 "subsystem": "keyring", 00:21:17.493 "config": [ 00:21:17.493 { 00:21:17.493 "method": "keyring_file_add_key", 00:21:17.493 "params": { 00:21:17.493 "name": "key0", 00:21:17.493 "path": "/tmp/tmp.zx8QaQ82uL" 00:21:17.493 } 00:21:17.493 } 00:21:17.493 ] 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "subsystem": "iobuf", 00:21:17.493 "config": [ 00:21:17.493 { 00:21:17.493 "method": "iobuf_set_options", 00:21:17.493 "params": { 00:21:17.493 "small_pool_count": 8192, 00:21:17.493 "large_pool_count": 1024, 00:21:17.493 "small_bufsize": 8192, 00:21:17.493 "large_bufsize": 135168, 00:21:17.493 "enable_numa": false 00:21:17.493 } 00:21:17.493 } 00:21:17.493 ] 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "subsystem": "sock", 00:21:17.493 "config": [ 00:21:17.493 { 00:21:17.493 "method": "sock_set_default_impl", 00:21:17.493 "params": { 00:21:17.493 "impl_name": "posix" 00:21:17.493 } 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "method": "sock_impl_set_options", 00:21:17.493 "params": { 00:21:17.493 "impl_name": "ssl", 00:21:17.493 "recv_buf_size": 4096, 00:21:17.493 "send_buf_size": 4096, 00:21:17.493 "enable_recv_pipe": true, 00:21:17.493 "enable_quickack": false, 00:21:17.493 "enable_placement_id": 0, 00:21:17.493 "enable_zerocopy_send_server": true, 00:21:17.493 "enable_zerocopy_send_client": false, 00:21:17.493 "zerocopy_threshold": 0, 00:21:17.493 "tls_version": 0, 00:21:17.493 "enable_ktls": false 00:21:17.493 } 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "method": "sock_impl_set_options", 00:21:17.493 "params": { 
00:21:17.493 "impl_name": "posix", 00:21:17.493 "recv_buf_size": 2097152, 00:21:17.493 "send_buf_size": 2097152, 00:21:17.493 "enable_recv_pipe": true, 00:21:17.493 "enable_quickack": false, 00:21:17.493 "enable_placement_id": 0, 00:21:17.493 "enable_zerocopy_send_server": true, 00:21:17.493 "enable_zerocopy_send_client": false, 00:21:17.493 "zerocopy_threshold": 0, 00:21:17.493 "tls_version": 0, 00:21:17.493 "enable_ktls": false 00:21:17.493 } 00:21:17.493 } 00:21:17.493 ] 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "subsystem": "vmd", 00:21:17.493 "config": [] 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "subsystem": "accel", 00:21:17.493 "config": [ 00:21:17.493 { 00:21:17.493 "method": "accel_set_options", 00:21:17.493 "params": { 00:21:17.493 "small_cache_size": 128, 00:21:17.493 "large_cache_size": 16, 00:21:17.493 "task_count": 2048, 00:21:17.493 "sequence_count": 2048, 00:21:17.493 "buf_count": 2048 00:21:17.493 } 00:21:17.493 } 00:21:17.493 ] 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "subsystem": "bdev", 00:21:17.493 "config": [ 00:21:17.493 { 00:21:17.493 "method": "bdev_set_options", 00:21:17.493 "params": { 00:21:17.493 "bdev_io_pool_size": 65535, 00:21:17.493 "bdev_io_cache_size": 256, 00:21:17.493 "bdev_auto_examine": true, 00:21:17.493 "iobuf_small_cache_size": 128, 00:21:17.493 "iobuf_large_cache_size": 16 00:21:17.493 } 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "method": "bdev_raid_set_options", 00:21:17.493 "params": { 00:21:17.493 "process_window_size_kb": 1024, 00:21:17.493 "process_max_bandwidth_mb_sec": 0 00:21:17.493 } 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "method": "bdev_iscsi_set_options", 00:21:17.493 "params": { 00:21:17.493 "timeout_sec": 30 00:21:17.493 } 00:21:17.493 }, 00:21:17.493 { 00:21:17.493 "method": "bdev_nvme_set_options", 00:21:17.493 "params": { 00:21:17.493 "action_on_timeout": "none", 00:21:17.493 "timeout_us": 0, 00:21:17.493 "timeout_admin_us": 0, 00:21:17.493 "keep_alive_timeout_ms": 10000, 00:21:17.493 
"arbitration_burst": 0, 00:21:17.493 "low_priority_weight": 0, 00:21:17.493 "medium_priority_weight": 0, 00:21:17.493 "high_priority_weight": 0, 00:21:17.493 "nvme_adminq_poll_period_us": 10000, 00:21:17.493 "nvme_ioq_poll_period_us": 0, 00:21:17.494 "io_queue_requests": 512, 00:21:17.494 "delay_cmd_submit": true, 00:21:17.494 "transport_retry_count": 4, 00:21:17.494 "bdev_retry_count": 3, 00:21:17.494 "transport_ack_timeout": 0, 00:21:17.494 "ctrlr_loss_timeout_sec": 0, 00:21:17.494 "reconnect_delay_sec": 0, 00:21:17.494 "fast_io_fail_timeout_sec": 0, 00:21:17.494 "disable_auto_failback": false, 00:21:17.494 "generate_uuids": false, 00:21:17.494 "transport_tos": 0, 00:21:17.494 "nvme_error_stat": false, 00:21:17.494 "rdma_srq_size": 0, 00:21:17.494 "io_path_stat": false, 00:21:17.494 "allow_accel_sequence": false, 00:21:17.494 "rdma_max_cq_size": 0, 00:21:17.494 "rdma_cm_event_timeout_ms": 0, 00:21:17.494 "dhchap_digests": [ 00:21:17.494 "sha256", 00:21:17.494 "sha384", 00:21:17.494 "sha512" 00:21:17.494 ], 00:21:17.494 "dhchap_dhgroups": [ 00:21:17.494 "null", 00:21:17.494 "ffdhe2048", 00:21:17.494 "ffdhe3072", 00:21:17.494 "ffdhe4096", 00:21:17.494 "ffdhe6144", 00:21:17.494 "ffdhe8192" 00:21:17.494 ], 00:21:17.494 "rdma_umr_per_io": false 00:21:17.494 } 00:21:17.494 }, 00:21:17.494 { 00:21:17.494 "method": "bdev_nvme_attach_controller", 00:21:17.494 "params": { 00:21:17.494 "name": "nvme0", 00:21:17.494 "trtype": "TCP", 00:21:17.494 "adrfam": "IPv4", 00:21:17.494 "traddr": "10.0.0.2", 00:21:17.494 "trsvcid": "4420", 00:21:17.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.494 "prchk_reftag": false, 00:21:17.494 "prchk_guard": false, 00:21:17.494 "ctrlr_loss_timeout_sec": 0, 00:21:17.494 "reconnect_delay_sec": 0, 00:21:17.494 "fast_io_fail_timeout_sec": 0, 00:21:17.494 "psk": "key0", 00:21:17.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.494 "hdgst": false, 00:21:17.494 "ddgst": false, 00:21:17.494 "multipath": "multipath" 00:21:17.494 } 00:21:17.494 
}, 00:21:17.494 { 00:21:17.494 "method": "bdev_nvme_set_hotplug", 00:21:17.494 "params": { 00:21:17.494 "period_us": 100000, 00:21:17.494 "enable": false 00:21:17.494 } 00:21:17.494 }, 00:21:17.494 { 00:21:17.494 "method": "bdev_enable_histogram", 00:21:17.494 "params": { 00:21:17.494 "name": "nvme0n1", 00:21:17.494 "enable": true 00:21:17.494 } 00:21:17.494 }, 00:21:17.494 { 00:21:17.494 "method": "bdev_wait_for_examine" 00:21:17.494 } 00:21:17.494 ] 00:21:17.494 }, 00:21:17.494 { 00:21:17.494 "subsystem": "nbd", 00:21:17.494 "config": [] 00:21:17.494 } 00:21:17.494 ] 00:21:17.494 }' 00:21:17.494 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.494 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.494 [2024-12-06 19:19:27.925916] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:17.494 [2024-12-06 19:19:27.926032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144806 ] 00:21:17.494 [2024-12-06 19:19:27.996800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.494 [2024-12-06 19:19:28.056316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.753 [2024-12-06 19:19:28.238527] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.011 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.011 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.011 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:21:18.011 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:18.270 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.270 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.270 Running I/O for 1 seconds... 00:21:19.204 3393.00 IOPS, 13.25 MiB/s 00:21:19.204 Latency(us) 00:21:19.204 [2024-12-06T18:19:29.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.204 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:19.204 Verification LBA range: start 0x0 length 0x2000 00:21:19.204 nvme0n1 : 1.02 3458.25 13.51 0.00 0.00 36685.67 6189.51 55924.05 00:21:19.204 [2024-12-06T18:19:29.781Z] =================================================================================================================== 00:21:19.204 [2024-12-06T18:19:29.781Z] Total : 3458.25 13.51 0.00 0.00 36685.67 6189.51 55924.05 00:21:19.204 { 00:21:19.204 "results": [ 00:21:19.204 { 00:21:19.204 "job": "nvme0n1", 00:21:19.204 "core_mask": "0x2", 00:21:19.204 "workload": "verify", 00:21:19.204 "status": "finished", 00:21:19.204 "verify_range": { 00:21:19.204 "start": 0, 00:21:19.204 "length": 8192 00:21:19.204 }, 00:21:19.204 "queue_depth": 128, 00:21:19.204 "io_size": 4096, 00:21:19.204 "runtime": 1.018144, 00:21:19.204 "iops": 3458.2534494138354, 00:21:19.204 "mibps": 13.508802536772794, 00:21:19.204 "io_failed": 0, 00:21:19.204 "io_timeout": 0, 00:21:19.204 "avg_latency_us": 36685.666070455576, 00:21:19.204 "min_latency_us": 6189.511111111111, 00:21:19.204 "max_latency_us": 55924.05333333334 00:21:19.204 } 00:21:19.204 ], 00:21:19.204 "core_count": 1 00:21:19.204 } 00:21:19.204 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:19.204 
19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:19.204 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:19.204 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:19.204 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:19.204 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:19.204 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:19.463 nvmf_trace.0 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1144806 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1144806 ']' 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1144806 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1144806 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1144806' 00:21:19.463 killing process with pid 1144806 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1144806 00:21:19.463 Received shutdown signal, test time was about 1.000000 seconds 00:21:19.463 00:21:19.463 Latency(us) 00:21:19.463 [2024-12-06T18:19:30.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.463 [2024-12-06T18:19:30.040Z] =================================================================================================================== 00:21:19.463 [2024-12-06T18:19:30.040Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.463 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1144806 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:19.721 rmmod nvme_tcp 00:21:19.721 rmmod nvme_fabrics 00:21:19.721 rmmod nvme_keyring 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1144725 ']' 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1144725 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1144725 ']' 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1144725 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1144725 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1144725' 00:21:19.721 killing process with pid 1144725 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1144725 00:21:19.721 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1144725 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:19.981 19:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.981 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yKy071oyqv /tmp/tmp.BCT46aRHnl /tmp/tmp.zx8QaQ82uL 00:21:22.527 00:21:22.527 real 1m23.477s 00:21:22.527 user 2m13.964s 00:21:22.527 sys 0m27.372s 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.527 ************************************ 00:21:22.527 END TEST nvmf_tls 00:21:22.527 ************************************ 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:22.527 
19:19:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:22.527 ************************************ 00:21:22.527 START TEST nvmf_fips 00:21:22.527 ************************************ 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:22.527 * Looking for test storage... 00:21:22.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:22.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.527 --rc genhtml_branch_coverage=1 00:21:22.527 --rc genhtml_function_coverage=1 00:21:22.527 --rc genhtml_legend=1 00:21:22.527 --rc geninfo_all_blocks=1 00:21:22.527 --rc geninfo_unexecuted_blocks=1 00:21:22.527 00:21:22.527 ' 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:22.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.527 --rc genhtml_branch_coverage=1 00:21:22.527 --rc genhtml_function_coverage=1 00:21:22.527 --rc genhtml_legend=1 00:21:22.527 --rc geninfo_all_blocks=1 00:21:22.527 --rc geninfo_unexecuted_blocks=1 00:21:22.527 00:21:22.527 ' 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:22.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.527 --rc genhtml_branch_coverage=1 00:21:22.527 --rc genhtml_function_coverage=1 00:21:22.527 --rc genhtml_legend=1 00:21:22.527 --rc geninfo_all_blocks=1 00:21:22.527 --rc geninfo_unexecuted_blocks=1 00:21:22.527 00:21:22.527 ' 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:22.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.527 --rc genhtml_branch_coverage=1 00:21:22.527 --rc genhtml_function_coverage=1 00:21:22.527 --rc genhtml_legend=1 00:21:22.527 --rc geninfo_all_blocks=1 00:21:22.527 --rc geninfo_unexecuted_blocks=1 00:21:22.527 00:21:22.527 ' 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.527 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:22.528 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:22.529 Error setting digest 00:21:22.529 40A23CAD237F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:22.529 40A23CAD237F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:22.529 19:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:22.529 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.435 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:24.436 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:24.436 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:24.436 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:24.436 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.436 19:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:24.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:21:24.436 00:21:24.436 --- 10.0.0.2 ping statistics --- 00:21:24.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.436 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:21:24.436 00:21:24.436 --- 10.0.0.1 ping statistics --- 00:21:24.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.436 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.436 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.695 19:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1147157 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1147157 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1147157 ']' 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.695 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.695 [2024-12-06 19:19:35.097259] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:21:24.695 [2024-12-06 19:19:35.097352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.695 [2024-12-06 19:19:35.169404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.695 [2024-12-06 19:19:35.227244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.695 [2024-12-06 19:19:35.227310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.695 [2024-12-06 19:19:35.227338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.695 [2024-12-06 19:19:35.227349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.695 [2024-12-06 19:19:35.227359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:24.695 [2024-12-06 19:19:35.228013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.UJV 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.UJV 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.UJV 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.UJV 00:21:24.954 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:25.212 [2024-12-06 19:19:35.666254] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.212 [2024-12-06 19:19:35.682263] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.212 [2024-12-06 19:19:35.682504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.212 malloc0 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1147193 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1147193 /var/tmp/bdevperf.sock 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1147193 ']' 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.212 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:25.470 [2024-12-06 19:19:35.814691] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:21:25.470 [2024-12-06 19:19:35.814773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147193 ] 00:21:25.470 [2024-12-06 19:19:35.880557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.470 [2024-12-06 19:19:35.944166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.727 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.727 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:25.727 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.UJV 00:21:25.984 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:26.242 [2024-12-06 19:19:36.562160] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.242 TLSTESTn1 00:21:26.242 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:26.242 Running I/O for 10 seconds... 
00:21:28.547 2887.00 IOPS, 11.28 MiB/s [2024-12-06T18:19:40.057Z] 2900.50 IOPS, 11.33 MiB/s [2024-12-06T18:19:40.991Z] 2956.33 IOPS, 11.55 MiB/s [2024-12-06T18:19:41.925Z] 2990.00 IOPS, 11.68 MiB/s [2024-12-06T18:19:42.860Z] 3006.60 IOPS, 11.74 MiB/s [2024-12-06T18:19:43.792Z] 3006.33 IOPS, 11.74 MiB/s [2024-12-06T18:19:45.167Z] 3007.43 IOPS, 11.75 MiB/s [2024-12-06T18:19:46.096Z] 3000.50 IOPS, 11.72 MiB/s [2024-12-06T18:19:47.028Z] 2997.89 IOPS, 11.71 MiB/s [2024-12-06T18:19:47.028Z] 2988.40 IOPS, 11.67 MiB/s 00:21:36.451 Latency(us) 00:21:36.451 [2024-12-06T18:19:47.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.451 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:36.451 Verification LBA range: start 0x0 length 0x2000 00:21:36.451 TLSTESTn1 : 10.02 2994.44 11.70 0.00 0.00 42674.33 9320.68 71070.15 00:21:36.451 [2024-12-06T18:19:47.028Z] =================================================================================================================== 00:21:36.451 [2024-12-06T18:19:47.028Z] Total : 2994.44 11.70 0.00 0.00 42674.33 9320.68 71070.15 00:21:36.451 { 00:21:36.451 "results": [ 00:21:36.451 { 00:21:36.451 "job": "TLSTESTn1", 00:21:36.451 "core_mask": "0x4", 00:21:36.451 "workload": "verify", 00:21:36.451 "status": "finished", 00:21:36.451 "verify_range": { 00:21:36.451 "start": 0, 00:21:36.451 "length": 8192 00:21:36.451 }, 00:21:36.451 "queue_depth": 128, 00:21:36.451 "io_size": 4096, 00:21:36.451 "runtime": 10.021892, 00:21:36.451 "iops": 2994.4445619649464, 00:21:36.451 "mibps": 11.697049070175572, 00:21:36.451 "io_failed": 0, 00:21:36.451 "io_timeout": 0, 00:21:36.451 "avg_latency_us": 42674.32686961137, 00:21:36.451 "min_latency_us": 9320.675555555556, 00:21:36.451 "max_latency_us": 71070.15111111112 00:21:36.451 } 00:21:36.451 ], 00:21:36.451 "core_count": 1 00:21:36.451 } 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:36.451 
19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:36.451 nvmf_trace.0 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1147193 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1147193 ']' 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1147193 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147193 00:21:36.451 19:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147193' 00:21:36.451 killing process with pid 1147193 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1147193 00:21:36.451 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.451 00:21:36.451 Latency(us) 00:21:36.451 [2024-12-06T18:19:47.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.451 [2024-12-06T18:19:47.028Z] =================================================================================================================== 00:21:36.451 [2024-12-06T18:19:47.028Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:36.451 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1147193 00:21:36.708 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:36.708 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:36.708 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:36.708 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:36.708 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:36.708 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:36.708 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:36.709 rmmod nvme_tcp 00:21:36.709 rmmod nvme_fabrics 00:21:36.709 rmmod nvme_keyring 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1147157 ']' 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1147157 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1147157 ']' 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1147157 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147157 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147157' 00:21:36.709 killing process with pid 1147157 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1147157 00:21:36.709 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1147157 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.967 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.UJV 00:21:39.500 00:21:39.500 real 0m16.979s 00:21:39.500 user 0m18.707s 00:21:39.500 sys 0m6.998s 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.500 ************************************ 00:21:39.500 END TEST nvmf_fips 00:21:39.500 ************************************ 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:39.500 ************************************ 00:21:39.500 START TEST nvmf_control_msg_list 00:21:39.500 ************************************ 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:39.500 * Looking for test storage... 00:21:39.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.500 19:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.500 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:39.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.501 --rc genhtml_branch_coverage=1 00:21:39.501 --rc genhtml_function_coverage=1 00:21:39.501 --rc genhtml_legend=1 00:21:39.501 --rc geninfo_all_blocks=1 00:21:39.501 --rc geninfo_unexecuted_blocks=1 00:21:39.501 00:21:39.501 ' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:39.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.501 --rc genhtml_branch_coverage=1 00:21:39.501 --rc genhtml_function_coverage=1 00:21:39.501 --rc genhtml_legend=1 00:21:39.501 --rc geninfo_all_blocks=1 00:21:39.501 --rc geninfo_unexecuted_blocks=1 00:21:39.501 00:21:39.501 ' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:39.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.501 --rc genhtml_branch_coverage=1 00:21:39.501 --rc genhtml_function_coverage=1 00:21:39.501 --rc genhtml_legend=1 00:21:39.501 --rc geninfo_all_blocks=1 00:21:39.501 --rc geninfo_unexecuted_blocks=1 00:21:39.501 00:21:39.501 ' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:21:39.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.501 --rc genhtml_branch_coverage=1 00:21:39.501 --rc genhtml_function_coverage=1 00:21:39.501 --rc genhtml_legend=1 00:21:39.501 --rc geninfo_all_blocks=1 00:21:39.501 --rc geninfo_unexecuted_blocks=1 00:21:39.501 00:21:39.501 ' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.501 19:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.501 19:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:39.501 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.406 19:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:41.406 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:41.406 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.406 19:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:41.406 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.406 19:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:41.406 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.406 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.407 19:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:21:41.407 00:21:41.407 --- 10.0.0.2 ping statistics --- 00:21:41.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.407 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:21:41.407 00:21:41.407 --- 10.0.0.1 ping statistics --- 00:21:41.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.407 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1150570 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1150570 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1150570 ']' 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.407 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.665 [2024-12-06 19:19:52.030135] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:41.665 [2024-12-06 19:19:52.030236] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.665 [2024-12-06 19:19:52.102941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.665 [2024-12-06 19:19:52.161404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.665 [2024-12-06 19:19:52.161469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.665 [2024-12-06 19:19:52.161498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.665 [2024-12-06 19:19:52.161510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.665 [2024-12-06 19:19:52.161520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
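The `nvmfappstart`/`waitforlisten` step traced above launches `nvmf_tgt` inside the network namespace and then blocks until the RPC UNIX domain socket (`/var/tmp/spdk.sock`) is available before any `rpc_cmd` is issued. A minimal sketch of that wait pattern, assuming the default socket path and a simple existence probe (the real helper retries an actual RPC connect):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "waitforlisten" pattern: poll until the
# SPDK RPC UNIX socket appears, then let the caller proceed with
# rpc_cmd. Socket path, retry count, and sleep interval are assumptions.
wait_for_rpc_sock() {
  local sock=$1 retries=${2:-50}
  while (( retries-- > 0 )); do
    # A plain existence check stands in for a real connect() probe.
    [[ -e $sock ]] && return 0
    sleep 0.1
  done
  return 1
}
```

In the traced run this corresponds to `waitforlisten 1150570`, which additionally verifies the target PID is still alive on every retry.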
00:21:41.665 [2024-12-06 19:19:52.162208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.924 [2024-12-06 19:19:52.313468] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:41.924 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.925 Malloc0 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.925 [2024-12-06 19:19:52.353535] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1150590 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1150591 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1150592 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1150590 00:21:41.925 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.925 [2024-12-06 19:19:52.412081] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:41.925 [2024-12-06 19:19:52.422517] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:41.925 [2024-12-06 19:19:52.423023] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:42.860 Initializing NVMe Controllers 00:21:42.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:42.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:42.860 Initialization complete. Launching workers. 00:21:42.860 ======================================================== 00:21:42.860 Latency(us) 00:21:42.860 Device Information : IOPS MiB/s Average min max 00:21:42.860 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4346.00 16.98 229.72 169.09 570.23 00:21:42.860 ======================================================== 00:21:42.860 Total : 4346.00 16.98 229.72 169.09 570.23 00:21:42.860 00:21:43.118 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1150591 00:21:43.118 Initializing NVMe Controllers 00:21:43.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:43.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:43.118 Initialization complete. Launching workers. 
00:21:43.118 ======================================================== 00:21:43.118 Latency(us) 00:21:43.118 Device Information : IOPS MiB/s Average min max 00:21:43.118 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40930.58 40616.83 41895.01 00:21:43.118 ======================================================== 00:21:43.118 Total : 25.00 0.10 40930.58 40616.83 41895.01 00:21:43.118 00:21:43.118 Initializing NVMe Controllers 00:21:43.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:43.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:43.118 Initialization complete. Launching workers. 00:21:43.118 ======================================================== 00:21:43.118 Latency(us) 00:21:43.118 Device Information : IOPS MiB/s Average min max 00:21:43.118 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4374.97 17.09 228.22 152.44 538.60 00:21:43.118 ======================================================== 00:21:43.118 Total : 4374.97 17.09 228.22 152.44 538.60 00:21:43.118 00:21:43.118 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1150592 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:43.119 19:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.119 rmmod nvme_tcp 00:21:43.119 rmmod nvme_fabrics 00:21:43.119 rmmod nvme_keyring 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1150570 ']' 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1150570 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1150570 ']' 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1150570 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1150570 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1150570' 00:21:43.119 killing process with pid 1150570 00:21:43.119 
19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1150570 00:21:43.119 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1150570 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.378 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.918 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.918 00:21:45.918 real 0m6.410s 00:21:45.918 user 0m5.267s 00:21:45.918 sys 0m2.764s 00:21:45.918 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.918 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:45.918 ************************************ 00:21:45.918 END TEST nvmf_control_msg_list 00:21:45.918 ************************************ 00:21:45.918 19:19:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:45.918 19:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:45.918 19:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.918 19:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.918 ************************************ 00:21:45.918 START TEST nvmf_wait_for_buf 00:21:45.918 ************************************ 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:45.918 * Looking for test storage... 
00:21:45.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:21:45.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.918 --rc genhtml_branch_coverage=1 00:21:45.918 --rc genhtml_function_coverage=1 00:21:45.918 --rc genhtml_legend=1 00:21:45.918 --rc geninfo_all_blocks=1 00:21:45.918 --rc geninfo_unexecuted_blocks=1 00:21:45.918 00:21:45.918 ' 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:45.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.918 --rc genhtml_branch_coverage=1 00:21:45.918 --rc genhtml_function_coverage=1 00:21:45.918 --rc genhtml_legend=1 00:21:45.918 --rc geninfo_all_blocks=1 00:21:45.918 --rc geninfo_unexecuted_blocks=1 00:21:45.918 00:21:45.918 ' 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:45.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.918 --rc genhtml_branch_coverage=1 00:21:45.918 --rc genhtml_function_coverage=1 00:21:45.918 --rc genhtml_legend=1 00:21:45.918 --rc geninfo_all_blocks=1 00:21:45.918 --rc geninfo_unexecuted_blocks=1 00:21:45.918 00:21:45.918 ' 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:45.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.918 --rc genhtml_branch_coverage=1 00:21:45.918 --rc genhtml_function_coverage=1 00:21:45.918 --rc genhtml_legend=1 00:21:45.918 --rc geninfo_all_blocks=1 00:21:45.918 --rc geninfo_unexecuted_blocks=1 00:21:45.918 00:21:45.918 ' 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.918 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.919 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:47.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:47.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:47.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.823 19:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:47.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.823 19:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.823 19:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.823 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.082 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:21:48.082 00:21:48.082 --- 10.0.0.2 ping statistics --- 00:21:48.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.083 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:21:48.083 00:21:48.083 --- 10.0.0.1 ping statistics --- 00:21:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.083 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1152670 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1152670 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1152670 ']' 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.083 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.083 [2024-12-06 19:19:58.489851] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:48.083 [2024-12-06 19:19:58.489938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.083 [2024-12-06 19:19:58.570719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.083 [2024-12-06 19:19:58.628873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.083 [2024-12-06 19:19:58.628931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:48.083 [2024-12-06 19:19:58.628959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.083 [2024-12-06 19:19:58.628970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.083 [2024-12-06 19:19:58.628980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.083 [2024-12-06 19:19:58.629610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.342 
19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.342 Malloc0 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:48.342 [2024-12-06 19:19:58.861093] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.342 [2024-12-06 19:19:58.885316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:48.342 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:48.601 [2024-12-06 19:19:58.974826] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:50.125 Initializing NVMe Controllers 00:21:50.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:50.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:50.126 Initialization complete. Launching workers. 00:21:50.126 ======================================================== 00:21:50.126 Latency(us) 00:21:50.126 Device Information : IOPS MiB/s Average min max 00:21:50.126 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 119.00 14.88 34972.97 8006.57 71824.46 00:21:50.126 ======================================================== 00:21:50.126 Total : 119.00 14.88 34972.97 8006.57 71824.46 00:21:50.126 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.126 19:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1878 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1878 -eq 0 ]] 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.126 rmmod nvme_tcp 00:21:50.126 rmmod nvme_fabrics 00:21:50.126 rmmod nvme_keyring 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1152670 ']' 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1152670 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1152670 ']' 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1152670 
00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1152670 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1152670' 00:21:50.126 killing process with pid 1152670 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1152670 00:21:50.126 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1152670 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.385 19:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.385 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.916 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:52.916 00:21:52.916 real 0m6.883s 00:21:52.916 user 0m3.316s 00:21:52.916 sys 0m2.041s 00:21:52.916 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.916 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:52.916 ************************************ 00:21:52.916 END TEST nvmf_wait_for_buf 00:21:52.916 ************************************ 00:21:52.916 19:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:52.916 19:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:52.916 19:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:52.916 19:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:52.916 19:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.916 19:20:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.819 
19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:54.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.819 19:20:05 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:54.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:54.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:54.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:54.819 ************************************ 00:21:54.819 START TEST nvmf_perf_adq 00:21:54.819 ************************************ 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:54.819 * Looking for test storage... 00:21:54.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:54.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.819 --rc genhtml_branch_coverage=1 00:21:54.819 --rc genhtml_function_coverage=1 00:21:54.819 --rc genhtml_legend=1 00:21:54.819 --rc geninfo_all_blocks=1 00:21:54.819 --rc geninfo_unexecuted_blocks=1 00:21:54.819 00:21:54.819 ' 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:54.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.819 --rc genhtml_branch_coverage=1 00:21:54.819 --rc genhtml_function_coverage=1 00:21:54.819 --rc genhtml_legend=1 00:21:54.819 --rc geninfo_all_blocks=1 00:21:54.819 --rc geninfo_unexecuted_blocks=1 00:21:54.819 00:21:54.819 ' 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:54.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.819 --rc genhtml_branch_coverage=1 00:21:54.819 --rc genhtml_function_coverage=1 00:21:54.819 --rc genhtml_legend=1 00:21:54.819 --rc geninfo_all_blocks=1 00:21:54.819 --rc geninfo_unexecuted_blocks=1 00:21:54.819 00:21:54.819 ' 00:21:54.819 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:54.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.819 --rc genhtml_branch_coverage=1 00:21:54.819 --rc genhtml_function_coverage=1 00:21:54.819 --rc genhtml_legend=1 00:21:54.820 --rc geninfo_all_blocks=1 00:21:54.820 --rc geninfo_unexecuted_blocks=1 00:21:54.820 00:21:54.820 ' 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.820 19:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.820 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.719 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.719 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:56.719 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:56.719 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:56.720 19:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:56.720 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:56.978 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:56.978 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:56.978 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.978 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:56.979 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:56.979 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:56.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:56.979 19:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:56.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:56.979 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:57.544 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:00.075 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:05.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:05.348 19:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:05.348 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:05.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.348 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:05.349 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:05.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:22:05.349 00:22:05.349 --- 10.0.0.2 ping statistics --- 00:22:05.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.349 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:22:05.349 00:22:05.349 --- 10.0.0.1 ping statistics --- 00:22:05.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.349 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1157522 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1157522 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1157522 ']' 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.349 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.349 [2024-12-06 19:20:15.714660] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:22:05.349 [2024-12-06 19:20:15.714753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.349 [2024-12-06 19:20:15.789410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.349 [2024-12-06 19:20:15.850654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.349 [2024-12-06 19:20:15.850747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.349 [2024-12-06 19:20:15.850762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.349 [2024-12-06 19:20:15.850774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.349 [2024-12-06 19:20:15.850784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:05.349 [2024-12-06 19:20:15.852555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.349 [2024-12-06 19:20:15.852620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.349 [2024-12-06 19:20:15.852753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.349 [2024-12-06 19:20:15.852758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.607 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.607 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.608 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:05.608 19:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.608 [2024-12-06 19:20:16.140692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.608 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.608 Malloc1 00:22:05.866 19:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.866 [2024-12-06 19:20:16.204105] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1157675 00:22:05.866 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:05.866 19:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:07.768 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:07.768 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.768 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.768 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.768 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:07.768 "tick_rate": 2700000000, 00:22:07.768 "poll_groups": [ 00:22:07.768 { 00:22:07.768 "name": "nvmf_tgt_poll_group_000", 00:22:07.768 "admin_qpairs": 1, 00:22:07.768 "io_qpairs": 1, 00:22:07.768 "current_admin_qpairs": 1, 00:22:07.768 "current_io_qpairs": 1, 00:22:07.768 "pending_bdev_io": 0, 00:22:07.768 "completed_nvme_io": 18438, 00:22:07.768 "transports": [ 00:22:07.768 { 00:22:07.768 "trtype": "TCP" 00:22:07.768 } 00:22:07.768 ] 00:22:07.768 }, 00:22:07.768 { 00:22:07.768 "name": "nvmf_tgt_poll_group_001", 00:22:07.768 "admin_qpairs": 0, 00:22:07.768 "io_qpairs": 1, 00:22:07.768 "current_admin_qpairs": 0, 00:22:07.768 "current_io_qpairs": 1, 00:22:07.768 "pending_bdev_io": 0, 00:22:07.768 "completed_nvme_io": 18537, 00:22:07.768 "transports": [ 00:22:07.768 { 00:22:07.768 "trtype": "TCP" 00:22:07.768 } 00:22:07.768 ] 00:22:07.768 }, 00:22:07.768 { 00:22:07.768 "name": "nvmf_tgt_poll_group_002", 00:22:07.768 "admin_qpairs": 0, 00:22:07.768 "io_qpairs": 1, 00:22:07.768 "current_admin_qpairs": 0, 00:22:07.768 "current_io_qpairs": 1, 00:22:07.768 "pending_bdev_io": 0, 00:22:07.768 "completed_nvme_io": 18483, 00:22:07.768 
"transports": [ 00:22:07.768 { 00:22:07.768 "trtype": "TCP" 00:22:07.769 } 00:22:07.769 ] 00:22:07.769 }, 00:22:07.769 { 00:22:07.769 "name": "nvmf_tgt_poll_group_003", 00:22:07.769 "admin_qpairs": 0, 00:22:07.769 "io_qpairs": 1, 00:22:07.769 "current_admin_qpairs": 0, 00:22:07.769 "current_io_qpairs": 1, 00:22:07.769 "pending_bdev_io": 0, 00:22:07.769 "completed_nvme_io": 18452, 00:22:07.769 "transports": [ 00:22:07.769 { 00:22:07.769 "trtype": "TCP" 00:22:07.769 } 00:22:07.769 ] 00:22:07.769 } 00:22:07.769 ] 00:22:07.769 }' 00:22:07.769 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:07.769 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:07.769 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:07.769 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:07.769 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1157675 00:22:17.739 Initializing NVMe Controllers 00:22:17.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:17.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:17.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:17.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:17.739 Initialization complete. Launching workers. 
00:22:17.739 ======================================================== 00:22:17.739 Latency(us) 00:22:17.739 Device Information : IOPS MiB/s Average min max 00:22:17.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10306.90 40.26 6210.97 1914.91 10129.51 00:22:17.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10323.50 40.33 6201.80 2169.54 10166.83 00:22:17.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10388.90 40.58 6160.75 2073.74 10085.01 00:22:17.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10321.70 40.32 6201.65 2322.14 10912.90 00:22:17.739 ======================================================== 00:22:17.739 Total : 41341.00 161.49 6193.73 1914.91 10912.90 00:22:17.739 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:17.739 rmmod nvme_tcp 00:22:17.739 rmmod nvme_fabrics 00:22:17.739 rmmod nvme_keyring 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:17.739 19:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1157522 ']' 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1157522 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1157522 ']' 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1157522 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1157522 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1157522' 00:22:17.739 killing process with pid 1157522 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1157522 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1157522 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:17.739 
19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:17.739 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:17.740 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.740 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:17.740 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.740 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.740 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.678 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:18.678 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:18.678 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:18.678 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:19.248 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:21.782 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:27.059 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:27.059 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.059 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.059 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:27.060 19:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:27.060 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:27.060 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:27.060 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:27.060 19:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:27.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:27.060 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:27.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:22:27.061 00:22:27.061 --- 10.0.0.2 ping statistics --- 00:22:27.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.061 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:27.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:22:27.061 00:22:27.061 --- 10.0.0.1 ping statistics --- 00:22:27.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.061 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:27.061 net.core.busy_poll = 1 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:27.061 net.core.busy_read = 1 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1160407 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
1160407 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1160407 ']' 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.061 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.061 [2024-12-06 19:20:37.464729] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:22:27.061 [2024-12-06 19:20:37.464826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.061 [2024-12-06 19:20:37.534901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.061 [2024-12-06 19:20:37.592907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.061 [2024-12-06 19:20:37.592976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.061 [2024-12-06 19:20:37.592990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.061 [2024-12-06 19:20:37.593000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:27.061 [2024-12-06 19:20:37.593024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.061 [2024-12-06 19:20:37.594501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.062 [2024-12-06 19:20:37.594564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.062 [2024-12-06 19:20:37.594632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.062 [2024-12-06 19:20:37.594636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.320 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.320 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:27.320 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.320 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:27.320 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
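The four "Reactor started on core" notices above follow from the `-m 0xF` core mask passed to nvmf_tgt. As a side note, a hex core mask expands to core IDs by its set bits; a small standalone sketch (not SPDK code, illustration only):

```shell
# Expand a hex core mask into the core IDs an SPDK app would run reactors on.
cores_from_mask() {
    local mask=$(( $1 )) bit cores=""
    for bit in $(seq 0 31); do
        # Each set bit selects one CPU core.
        (( mask >> bit & 1 )) && cores="$cores $bit"
    done
    echo "${cores# }"
}

cores_from_mask 0xF    # 0 1 2 3  -> the four target reactors in this log
cores_from_mask 0xF0   # 4 5 6 7  -> the spdk_nvme_perf initiator's lcores
```

This matches the log: the target's reactors come up on cores 0-3, and the perf initiator launched later with `-c 0xF0` associates its queue pairs with lcores 4-7.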
00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.321 [2024-12-06 19:20:37.866767] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:27.321 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.321 19:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.579 Malloc1 00:22:27.579 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.579 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.579 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.579 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.579 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.579 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.580 [2024-12-06 19:20:37.925763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1160452 
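The `rpc_cmd` calls above (sock options, framework init, transport, bdev, subsystem, namespace, listener) are the standard SPDK target bring-up sequence; a condensed restatement as plain rpc.py invocations, for reference (the rpc.py path is an assumption — adjust to the SPDK checkout; values are taken verbatim from the log):

```shell
# Target-side configuration sequence, as issued via rpc_cmd in perf_adq.sh.
rpc=./scripts/rpc.py   # path assumption; the test uses its own rpc_cmd wrapper

$rpc sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Note that `--sock-priority 1` on the transport pairs with the tc flower filter installed earlier (`hw_tc 1` for TCP port 4420), which is what steers ADQ traffic onto the dedicated hardware queue set.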
00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:27.580 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:29.479 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:29.479 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.479 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.479 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.479 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:29.479 "tick_rate": 2700000000, 00:22:29.479 "poll_groups": [ 00:22:29.479 { 00:22:29.479 "name": "nvmf_tgt_poll_group_000", 00:22:29.479 "admin_qpairs": 1, 00:22:29.479 "io_qpairs": 3, 00:22:29.479 "current_admin_qpairs": 1, 00:22:29.479 "current_io_qpairs": 3, 00:22:29.479 "pending_bdev_io": 0, 00:22:29.479 "completed_nvme_io": 25627, 00:22:29.479 "transports": [ 00:22:29.479 { 00:22:29.479 "trtype": "TCP" 00:22:29.479 } 00:22:29.479 ] 00:22:29.479 }, 00:22:29.479 { 00:22:29.479 "name": "nvmf_tgt_poll_group_001", 00:22:29.479 "admin_qpairs": 0, 00:22:29.479 "io_qpairs": 1, 00:22:29.479 "current_admin_qpairs": 0, 00:22:29.479 "current_io_qpairs": 1, 00:22:29.479 "pending_bdev_io": 0, 00:22:29.479 "completed_nvme_io": 26065, 00:22:29.479 "transports": [ 00:22:29.479 { 00:22:29.479 "trtype": "TCP" 00:22:29.479 } 00:22:29.479 ] 00:22:29.479 }, 00:22:29.479 { 00:22:29.479 "name": "nvmf_tgt_poll_group_002", 00:22:29.479 "admin_qpairs": 0, 00:22:29.479 "io_qpairs": 0, 00:22:29.479 "current_admin_qpairs": 0, 
00:22:29.479 "current_io_qpairs": 0, 00:22:29.479 "pending_bdev_io": 0, 00:22:29.479 "completed_nvme_io": 0, 00:22:29.479 "transports": [ 00:22:29.479 { 00:22:29.479 "trtype": "TCP" 00:22:29.479 } 00:22:29.479 ] 00:22:29.479 }, 00:22:29.479 { 00:22:29.479 "name": "nvmf_tgt_poll_group_003", 00:22:29.479 "admin_qpairs": 0, 00:22:29.479 "io_qpairs": 0, 00:22:29.479 "current_admin_qpairs": 0, 00:22:29.479 "current_io_qpairs": 0, 00:22:29.479 "pending_bdev_io": 0, 00:22:29.479 "completed_nvme_io": 0, 00:22:29.479 "transports": [ 00:22:29.479 { 00:22:29.479 "trtype": "TCP" 00:22:29.479 } 00:22:29.479 ] 00:22:29.479 } 00:22:29.479 ] 00:22:29.479 }' 00:22:29.480 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:29.480 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:29.480 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:29.480 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:29.480 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1160452 00:22:37.651 Initializing NVMe Controllers 00:22:37.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:37.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:37.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:37.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:37.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:37.651 Initialization complete. Launching workers. 
00:22:37.651 ======================================================== 00:22:37.651 Latency(us) 00:22:37.651 Device Information : IOPS MiB/s Average min max 00:22:37.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4517.70 17.65 14167.78 1976.61 61465.72 00:22:37.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4510.70 17.62 14230.32 2852.79 61774.94 00:22:37.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4486.10 17.52 14311.74 1882.43 62724.48 00:22:37.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13318.40 52.02 4805.91 1608.78 45360.24 00:22:37.651 ======================================================== 00:22:37.651 Total : 26832.89 104.82 9555.64 1608.78 62724.48 00:22:37.651 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.651 rmmod nvme_tcp 00:22:37.651 rmmod nvme_fabrics 00:22:37.651 rmmod nvme_keyring 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:37.651 19:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1160407 ']' 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1160407 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1160407 ']' 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1160407 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1160407 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1160407' 00:22:37.651 killing process with pid 1160407 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1160407 00:22:37.651 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1160407 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:37.911 
19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.911 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:41.203 00:22:41.203 real 0m46.374s 00:22:41.203 user 2m41.012s 00:22:41.203 sys 0m9.311s 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.203 ************************************ 00:22:41.203 END TEST nvmf_perf_adq 00:22:41.203 ************************************ 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.203 ************************************ 00:22:41.203 START TEST nvmf_shutdown 00:22:41.203 ************************************ 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:41.203 * Looking for test storage... 00:22:41.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.203 19:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.203 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:41.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.204 --rc genhtml_branch_coverage=1 00:22:41.204 --rc genhtml_function_coverage=1 00:22:41.204 --rc genhtml_legend=1 00:22:41.204 --rc geninfo_all_blocks=1 00:22:41.204 --rc geninfo_unexecuted_blocks=1 00:22:41.204 00:22:41.204 ' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:41.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.204 --rc genhtml_branch_coverage=1 00:22:41.204 --rc genhtml_function_coverage=1 00:22:41.204 --rc genhtml_legend=1 00:22:41.204 --rc geninfo_all_blocks=1 00:22:41.204 --rc geninfo_unexecuted_blocks=1 00:22:41.204 00:22:41.204 ' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:41.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.204 --rc genhtml_branch_coverage=1 00:22:41.204 --rc genhtml_function_coverage=1 00:22:41.204 --rc genhtml_legend=1 00:22:41.204 --rc geninfo_all_blocks=1 00:22:41.204 --rc geninfo_unexecuted_blocks=1 00:22:41.204 00:22:41.204 ' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:41.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.204 --rc genhtml_branch_coverage=1 00:22:41.204 --rc genhtml_function_coverage=1 00:22:41.204 --rc genhtml_legend=1 00:22:41.204 --rc geninfo_all_blocks=1 00:22:41.204 --rc geninfo_unexecuted_blocks=1 00:22:41.204 00:22:41.204 ' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:41.204 ************************************ 00:22:41.204 START TEST nvmf_shutdown_tc1 00:22:41.204 ************************************ 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.204 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:43.737 19:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.737 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.738 19:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:43.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.738 19:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:43.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:43.738 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:43.738 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.738 19:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.738 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:22:43.738 00:22:43.738 --- 10.0.0.2 ping statistics --- 00:22:43.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.738 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:22:43.738 00:22:43.738 --- 10.0.0.1 ping statistics --- 00:22:43.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.738 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.738 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.739 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1163755 00:22:43.739 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:43.739 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1163755 00:22:43.739 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1163755 ']' 00:22:43.739 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.739 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.739 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:43.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.739 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.739 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.739 [2024-12-06 19:20:54.107600] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:22:43.739 [2024-12-06 19:20:54.107727] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.739 [2024-12-06 19:20:54.182054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.739 [2024-12-06 19:20:54.243230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.739 [2024-12-06 19:20:54.243289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.739 [2024-12-06 19:20:54.243320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.739 [2024-12-06 19:20:54.243331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.739 [2024-12-06 19:20:54.243341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.739 [2024-12-06 19:20:54.244946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.739 [2024-12-06 19:20:54.245008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.739 [2024-12-06 19:20:54.245072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:43.739 [2024-12-06 19:20:54.245075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.997 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.997 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:43.997 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.997 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.997 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.998 [2024-12-06 19:20:54.400528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.998 19:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.998 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.998 Malloc1 00:22:43.998 [2024-12-06 19:20:54.501053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.998 Malloc2 00:22:44.257 Malloc3 00:22:44.257 Malloc4 00:22:44.257 Malloc5 00:22:44.257 Malloc6 00:22:44.257 Malloc7 00:22:44.257 Malloc8 00:22:44.515 Malloc9 
00:22:44.515 Malloc10 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1163936 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1163936 /var/tmp/bdevperf.sock 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1163936 ']' 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:44.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.515 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.515 { 00:22:44.515 "params": { 00:22:44.515 "name": "Nvme$subsystem", 00:22:44.515 "trtype": "$TEST_TRANSPORT", 00:22:44.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.515 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": ${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 )") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.516 { 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme$subsystem", 00:22:44.516 "trtype": "$TEST_TRANSPORT", 00:22:44.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.516 
"adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": ${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 )") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.516 { 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme$subsystem", 00:22:44.516 "trtype": "$TEST_TRANSPORT", 00:22:44.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": ${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 )") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.516 { 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme$subsystem", 00:22:44.516 "trtype": "$TEST_TRANSPORT", 00:22:44.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": ${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 )") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.516 { 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme$subsystem", 00:22:44.516 "trtype": "$TEST_TRANSPORT", 00:22:44.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": ${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 )") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.516 { 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme$subsystem", 00:22:44.516 "trtype": "$TEST_TRANSPORT", 00:22:44.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": 
${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 )") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.516 { 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme$subsystem", 00:22:44.516 "trtype": "$TEST_TRANSPORT", 00:22:44.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": ${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 )") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.516 { 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme$subsystem", 00:22:44.516 "trtype": "$TEST_TRANSPORT", 00:22:44.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": ${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 
)") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.516 { 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme$subsystem", 00:22:44.516 "trtype": "$TEST_TRANSPORT", 00:22:44.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": ${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 )") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.516 { 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme$subsystem", 00:22:44.516 "trtype": "$TEST_TRANSPORT", 00:22:44.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "$NVMF_PORT", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.516 "hdgst": ${hdgst:-false}, 00:22:44.516 "ddgst": ${ddgst:-false} 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 } 00:22:44.516 EOF 00:22:44.516 )") 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.516 
19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:44.516 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme1", 00:22:44.516 "trtype": "tcp", 00:22:44.516 "traddr": "10.0.0.2", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "4420", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.516 "hdgst": false, 00:22:44.516 "ddgst": false 00:22:44.516 }, 00:22:44.516 "method": "bdev_nvme_attach_controller" 00:22:44.516 },{ 00:22:44.516 "params": { 00:22:44.516 "name": "Nvme2", 00:22:44.516 "trtype": "tcp", 00:22:44.516 "traddr": "10.0.0.2", 00:22:44.516 "adrfam": "ipv4", 00:22:44.516 "trsvcid": "4420", 00:22:44.516 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:44.516 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.516 "hdgst": false, 00:22:44.516 "ddgst": false 00:22:44.516 }, 00:22:44.517 "method": "bdev_nvme_attach_controller" 00:22:44.517 },{ 00:22:44.517 "params": { 00:22:44.517 "name": "Nvme3", 00:22:44.517 "trtype": "tcp", 00:22:44.517 "traddr": "10.0.0.2", 00:22:44.517 "adrfam": "ipv4", 00:22:44.517 "trsvcid": "4420", 00:22:44.517 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:44.517 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:44.517 "hdgst": false, 00:22:44.517 "ddgst": false 00:22:44.517 }, 00:22:44.517 "method": "bdev_nvme_attach_controller" 00:22:44.517 },{ 00:22:44.517 "params": { 00:22:44.517 "name": "Nvme4", 00:22:44.517 "trtype": "tcp", 00:22:44.517 "traddr": "10.0.0.2", 00:22:44.517 "adrfam": "ipv4", 00:22:44.517 "trsvcid": "4420", 00:22:44.517 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:44.517 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:44.517 "hdgst": false, 00:22:44.517 "ddgst": false 00:22:44.517 }, 
00:22:44.517 "method": "bdev_nvme_attach_controller" 00:22:44.517 },{ 00:22:44.517 "params": { 00:22:44.517 "name": "Nvme5", 00:22:44.517 "trtype": "tcp", 00:22:44.517 "traddr": "10.0.0.2", 00:22:44.517 "adrfam": "ipv4", 00:22:44.517 "trsvcid": "4420", 00:22:44.517 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:44.517 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:44.517 "hdgst": false, 00:22:44.517 "ddgst": false 00:22:44.517 }, 00:22:44.517 "method": "bdev_nvme_attach_controller" 00:22:44.517 },{ 00:22:44.517 "params": { 00:22:44.517 "name": "Nvme6", 00:22:44.517 "trtype": "tcp", 00:22:44.517 "traddr": "10.0.0.2", 00:22:44.517 "adrfam": "ipv4", 00:22:44.517 "trsvcid": "4420", 00:22:44.517 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:44.517 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:44.517 "hdgst": false, 00:22:44.517 "ddgst": false 00:22:44.517 }, 00:22:44.517 "method": "bdev_nvme_attach_controller" 00:22:44.517 },{ 00:22:44.517 "params": { 00:22:44.517 "name": "Nvme7", 00:22:44.517 "trtype": "tcp", 00:22:44.517 "traddr": "10.0.0.2", 00:22:44.517 "adrfam": "ipv4", 00:22:44.517 "trsvcid": "4420", 00:22:44.517 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:44.517 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:44.517 "hdgst": false, 00:22:44.517 "ddgst": false 00:22:44.517 }, 00:22:44.517 "method": "bdev_nvme_attach_controller" 00:22:44.517 },{ 00:22:44.517 "params": { 00:22:44.517 "name": "Nvme8", 00:22:44.517 "trtype": "tcp", 00:22:44.517 "traddr": "10.0.0.2", 00:22:44.517 "adrfam": "ipv4", 00:22:44.517 "trsvcid": "4420", 00:22:44.517 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:44.517 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:44.517 "hdgst": false, 00:22:44.517 "ddgst": false 00:22:44.517 }, 00:22:44.517 "method": "bdev_nvme_attach_controller" 00:22:44.517 },{ 00:22:44.517 "params": { 00:22:44.517 "name": "Nvme9", 00:22:44.517 "trtype": "tcp", 00:22:44.517 "traddr": "10.0.0.2", 00:22:44.517 "adrfam": "ipv4", 00:22:44.517 "trsvcid": "4420", 00:22:44.517 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:44.517 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:44.517 "hdgst": false, 00:22:44.517 "ddgst": false 00:22:44.517 }, 00:22:44.517 "method": "bdev_nvme_attach_controller" 00:22:44.517 },{ 00:22:44.517 "params": { 00:22:44.517 "name": "Nvme10", 00:22:44.517 "trtype": "tcp", 00:22:44.517 "traddr": "10.0.0.2", 00:22:44.517 "adrfam": "ipv4", 00:22:44.517 "trsvcid": "4420", 00:22:44.517 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:44.517 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:44.517 "hdgst": false, 00:22:44.517 "ddgst": false 00:22:44.517 }, 00:22:44.517 "method": "bdev_nvme_attach_controller" 00:22:44.517 }' 00:22:44.517 [2024-12-06 19:20:54.998017] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:22:44.517 [2024-12-06 19:20:54.998128] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:44.517 [2024-12-06 19:20:55.069975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.786 [2024-12-06 19:20:55.130340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.686 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.686 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:46.686 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:46.686 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.686 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:46.686 19:20:57 
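[Editor's sketch] The xtrace above shows the pattern `gen_nvmf_target_json` from `nvmf/common.sh` uses: one JSON fragment per subsystem index appended to a `config` array via a heredoc inside command substitution, then joined with `IFS=,` into the comma-separated list fed to bdev_svc/bdevperf. A minimal standalone sketch of that pattern follows; it is a simplification, not the real helper: the `hdgst`/`ddgst` defaults are hardcoded to `false`, the transport variables are example values, and the real script additionally wraps the list in a full `"subsystems"` JSON document and pretty-prints it with `jq`.

```shell
#!/usr/bin/env bash
# Example values standing in for the environment the test framework exports.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_target_json() {
    local subsystem config=()
    # One attach_controller fragment per subsystem index (defaults to "1").
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas, as the IFS=, printf in the log does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 1 2 3
```

Piping the joined output through `jq` (inside a `{"subsystems": [...]}` wrapper) is what turns it into the `--json /dev/fd/63` config the log shows bdev_svc consuming.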
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.686 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1163936 00:22:46.686 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:46.686 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:47.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1163936 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1163755 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.620 { 00:22:47.620 "params": { 00:22:47.620 "name": "Nvme$subsystem", 00:22:47.620 "trtype": "$TEST_TRANSPORT", 00:22:47.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.620 "adrfam": "ipv4", 00:22:47.620 "trsvcid": 
"$NVMF_PORT", 00:22:47.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.620 "hdgst": ${hdgst:-false}, 00:22:47.620 "ddgst": ${ddgst:-false} 00:22:47.620 }, 00:22:47.620 "method": "bdev_nvme_attach_controller" 00:22:47.620 } 00:22:47.620 EOF 00:22:47.620 )") 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.620 { 00:22:47.620 "params": { 00:22:47.620 "name": "Nvme$subsystem", 00:22:47.620 "trtype": "$TEST_TRANSPORT", 00:22:47.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.620 "adrfam": "ipv4", 00:22:47.620 "trsvcid": "$NVMF_PORT", 00:22:47.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.620 "hdgst": ${hdgst:-false}, 00:22:47.620 "ddgst": ${ddgst:-false} 00:22:47.620 }, 00:22:47.620 "method": "bdev_nvme_attach_controller" 00:22:47.620 } 00:22:47.620 EOF 00:22:47.620 )") 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.620 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.620 { 00:22:47.620 "params": { 00:22:47.620 "name": "Nvme$subsystem", 00:22:47.620 "trtype": "$TEST_TRANSPORT", 00:22:47.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.620 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "$NVMF_PORT", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.621 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:47.621 "hdgst": ${hdgst:-false}, 00:22:47.621 "ddgst": ${ddgst:-false} 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 } 00:22:47.621 EOF 00:22:47.621 )") 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.621 { 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme$subsystem", 00:22:47.621 "trtype": "$TEST_TRANSPORT", 00:22:47.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "$NVMF_PORT", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.621 "hdgst": ${hdgst:-false}, 00:22:47.621 "ddgst": ${ddgst:-false} 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 } 00:22:47.621 EOF 00:22:47.621 )") 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.621 { 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme$subsystem", 00:22:47.621 "trtype": "$TEST_TRANSPORT", 00:22:47.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "$NVMF_PORT", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.621 "hdgst": ${hdgst:-false}, 00:22:47.621 "ddgst": ${ddgst:-false} 00:22:47.621 
}, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 } 00:22:47.621 EOF 00:22:47.621 )") 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.621 { 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme$subsystem", 00:22:47.621 "trtype": "$TEST_TRANSPORT", 00:22:47.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "$NVMF_PORT", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.621 "hdgst": ${hdgst:-false}, 00:22:47.621 "ddgst": ${ddgst:-false} 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 } 00:22:47.621 EOF 00:22:47.621 )") 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.621 { 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme$subsystem", 00:22:47.621 "trtype": "$TEST_TRANSPORT", 00:22:47.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "$NVMF_PORT", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.621 "hdgst": ${hdgst:-false}, 00:22:47.621 "ddgst": ${ddgst:-false} 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 } 00:22:47.621 EOF 00:22:47.621 )") 00:22:47.621 19:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.621 { 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme$subsystem", 00:22:47.621 "trtype": "$TEST_TRANSPORT", 00:22:47.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "$NVMF_PORT", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.621 "hdgst": ${hdgst:-false}, 00:22:47.621 "ddgst": ${ddgst:-false} 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 } 00:22:47.621 EOF 00:22:47.621 )") 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.621 { 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme$subsystem", 00:22:47.621 "trtype": "$TEST_TRANSPORT", 00:22:47.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "$NVMF_PORT", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.621 "hdgst": ${hdgst:-false}, 00:22:47.621 "ddgst": ${ddgst:-false} 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 } 00:22:47.621 EOF 00:22:47.621 )") 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.621 19:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:47.621 { 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme$subsystem", 00:22:47.621 "trtype": "$TEST_TRANSPORT", 00:22:47.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "$NVMF_PORT", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.621 "hdgst": ${hdgst:-false}, 00:22:47.621 "ddgst": ${ddgst:-false} 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 } 00:22:47.621 EOF 00:22:47.621 )") 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:47.621 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme1", 00:22:47.621 "trtype": "tcp", 00:22:47.621 "traddr": "10.0.0.2", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "4420", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.621 "hdgst": false, 00:22:47.621 "ddgst": false 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 },{ 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme2", 00:22:47.621 "trtype": "tcp", 00:22:47.621 "traddr": "10.0.0.2", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "4420", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:47.621 "hdgst": false, 00:22:47.621 "ddgst": false 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 },{ 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme3", 00:22:47.621 "trtype": "tcp", 00:22:47.621 "traddr": "10.0.0.2", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "4420", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:47.621 "hdgst": false, 00:22:47.621 "ddgst": false 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 },{ 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme4", 00:22:47.621 "trtype": "tcp", 00:22:47.621 "traddr": "10.0.0.2", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "4420", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:47.621 "hdgst": false, 00:22:47.621 "ddgst": false 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 },{ 00:22:47.621 "params": { 
00:22:47.621 "name": "Nvme5", 00:22:47.621 "trtype": "tcp", 00:22:47.621 "traddr": "10.0.0.2", 00:22:47.621 "adrfam": "ipv4", 00:22:47.621 "trsvcid": "4420", 00:22:47.621 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:47.621 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:47.621 "hdgst": false, 00:22:47.621 "ddgst": false 00:22:47.621 }, 00:22:47.621 "method": "bdev_nvme_attach_controller" 00:22:47.621 },{ 00:22:47.621 "params": { 00:22:47.621 "name": "Nvme6", 00:22:47.622 "trtype": "tcp", 00:22:47.622 "traddr": "10.0.0.2", 00:22:47.622 "adrfam": "ipv4", 00:22:47.622 "trsvcid": "4420", 00:22:47.622 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:47.622 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:47.622 "hdgst": false, 00:22:47.622 "ddgst": false 00:22:47.622 }, 00:22:47.622 "method": "bdev_nvme_attach_controller" 00:22:47.622 },{ 00:22:47.622 "params": { 00:22:47.622 "name": "Nvme7", 00:22:47.622 "trtype": "tcp", 00:22:47.622 "traddr": "10.0.0.2", 00:22:47.622 "adrfam": "ipv4", 00:22:47.622 "trsvcid": "4420", 00:22:47.622 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:47.622 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:47.622 "hdgst": false, 00:22:47.622 "ddgst": false 00:22:47.622 }, 00:22:47.622 "method": "bdev_nvme_attach_controller" 00:22:47.622 },{ 00:22:47.622 "params": { 00:22:47.622 "name": "Nvme8", 00:22:47.622 "trtype": "tcp", 00:22:47.622 "traddr": "10.0.0.2", 00:22:47.622 "adrfam": "ipv4", 00:22:47.622 "trsvcid": "4420", 00:22:47.622 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:47.622 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:47.622 "hdgst": false, 00:22:47.622 "ddgst": false 00:22:47.622 }, 00:22:47.622 "method": "bdev_nvme_attach_controller" 00:22:47.622 },{ 00:22:47.622 "params": { 00:22:47.622 "name": "Nvme9", 00:22:47.622 "trtype": "tcp", 00:22:47.622 "traddr": "10.0.0.2", 00:22:47.622 "adrfam": "ipv4", 00:22:47.622 "trsvcid": "4420", 00:22:47.622 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:47.622 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:47.622 "hdgst": false, 00:22:47.622 "ddgst": false 00:22:47.622 }, 00:22:47.622 "method": "bdev_nvme_attach_controller" 00:22:47.622 },{ 00:22:47.622 "params": { 00:22:47.622 "name": "Nvme10", 00:22:47.622 "trtype": "tcp", 00:22:47.622 "traddr": "10.0.0.2", 00:22:47.622 "adrfam": "ipv4", 00:22:47.622 "trsvcid": "4420", 00:22:47.622 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:47.622 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:47.622 "hdgst": false, 00:22:47.622 "ddgst": false 00:22:47.622 }, 00:22:47.622 "method": "bdev_nvme_attach_controller" 00:22:47.622 }' 00:22:47.622 [2024-12-06 19:20:58.058382] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:22:47.622 [2024-12-06 19:20:58.058457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164353 ] 00:22:47.622 [2024-12-06 19:20:58.130567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.622 [2024-12-06 19:20:58.190482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.995 Running I/O for 1 seconds... 
00:22:50.190 1801.00 IOPS, 112.56 MiB/s 00:22:50.190 Latency(us) 00:22:50.190 [2024-12-06T18:21:00.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.190 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme1n1 : 1.15 223.50 13.97 0.00 0.00 281691.40 20583.16 271853.04 00:22:50.190 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme2n1 : 1.12 232.18 14.51 0.00 0.00 267171.23 8009.96 256318.58 00:22:50.190 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme3n1 : 1.11 229.92 14.37 0.00 0.00 266184.82 17961.72 260978.92 00:22:50.190 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme4n1 : 1.18 271.49 16.97 0.00 0.00 222299.02 18835.53 253211.69 00:22:50.190 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme5n1 : 1.15 222.02 13.88 0.00 0.00 265622.57 22039.51 274959.93 00:22:50.190 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme6n1 : 1.13 225.90 14.12 0.00 0.00 257599.72 21359.88 281173.71 00:22:50.190 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme7n1 : 1.14 225.38 14.09 0.00 0.00 253720.27 19320.98 245444.46 00:22:50.190 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme8n1 : 1.14 223.96 14.00 0.00 0.00 251125.57 16408.27 246997.90 
00:22:50.190 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme9n1 : 1.18 216.09 13.51 0.00 0.00 257174.57 22136.60 279620.27 00:22:50.190 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.190 Verification LBA range: start 0x0 length 0x400 00:22:50.190 Nvme10n1 : 1.19 268.26 16.77 0.00 0.00 203761.66 6941.96 253211.69 00:22:50.190 [2024-12-06T18:21:00.767Z] =================================================================================================================== 00:22:50.190 [2024-12-06T18:21:00.767Z] Total : 2338.70 146.17 0.00 0.00 250773.54 6941.96 281173.71 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:50.448 19:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.448 rmmod nvme_tcp 00:22:50.448 rmmod nvme_fabrics 00:22:50.448 rmmod nvme_keyring 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:50.448 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1163755 ']' 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1163755 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1163755 ']' 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1163755 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1163755 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:50.449 19:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1163755' 00:22:50.449 killing process with pid 1163755 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1163755 00:22:50.449 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1163755 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.016 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.925 00:22:52.925 real 0m11.731s 00:22:52.925 user 0m33.100s 00:22:52.925 sys 0m3.271s 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:52.925 ************************************ 00:22:52.925 END TEST nvmf_shutdown_tc1 00:22:52.925 ************************************ 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:52.925 ************************************ 00:22:52.925 START TEST nvmf_shutdown_tc2 00:22:52.925 ************************************ 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.925 19:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:52.925 19:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.925 19:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:52.925 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.925 19:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:52.925 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:52.926 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:52.926 19:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:52.926 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.926 19:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:52.926 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.926 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:53.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:22:53.185 00:22:53.185 --- 10.0.0.2 ping statistics --- 00:22:53.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.185 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:53.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:22:53.185 00:22:53.185 --- 10.0.0.1 ping statistics --- 00:22:53.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.185 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.185 
19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1165225 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1165225 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1165225 ']' 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.185 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.185 [2024-12-06 19:21:03.689776] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:22:53.185 [2024-12-06 19:21:03.689858] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.185 [2024-12-06 19:21:03.760758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.444 [2024-12-06 19:21:03.816408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.444 [2024-12-06 19:21:03.816466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.444 [2024-12-06 19:21:03.816495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.444 [2024-12-06 19:21:03.816506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.444 [2024-12-06 19:21:03.816515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
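The nvmf target here is launched with core mask `0x1E`, and the log then reports four reactors starting on cores 1 through 4. The mapping from the hex mask to core IDs is a per-bit test; a small illustrative sketch (not SPDK/DPDK code):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the core IDs selected by an SPDK/DPDK-style core mask (-m/-c)."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:          # bit N set -> core N is in the mask
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

# 0x1E = 0b11110 -> cores 1, 2, 3 and 4, matching the four reactors in the log.
print(cores_from_mask(0x1E))  # -> [1, 2, 3, 4]
```

Core 0 is deliberately left out of the mask so the bdevperf initiator (run with `-c 0x1`) and the target do not contend for the same core.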
00:22:53.444 [2024-12-06 19:21:03.817978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.444 [2024-12-06 19:21:03.818026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.444 [2024-12-06 19:21:03.818084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:53.444 [2024-12-06 19:21:03.818086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.444 [2024-12-06 19:21:03.964404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.444 19:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.444 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.445 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:53.445 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:53.445 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:53.445 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:53.445 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.445 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.703 Malloc1 00:22:53.703 [2024-12-06 19:21:04.063247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.703 Malloc2 00:22:53.703 Malloc3 00:22:53.703 Malloc4 00:22:53.703 Malloc5 00:22:53.961 Malloc6 00:22:53.961 Malloc7 00:22:53.961 Malloc8 00:22:53.961 Malloc9 
00:22:53.961 Malloc10 00:22:53.961 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.961 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:53.961 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.961 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1165286 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1165286 /var/tmp/bdevperf.sock 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1165286 ']' 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:54.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.219 { 00:22:54.219 "params": { 00:22:54.219 "name": "Nvme$subsystem", 00:22:54.219 "trtype": "$TEST_TRANSPORT", 00:22:54.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.219 "adrfam": "ipv4", 00:22:54.219 "trsvcid": "$NVMF_PORT", 00:22:54.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.219 "hdgst": ${hdgst:-false}, 00:22:54.219 "ddgst": ${ddgst:-false} 00:22:54.219 }, 00:22:54.219 "method": "bdev_nvme_attach_controller" 00:22:54.219 } 00:22:54.219 EOF 00:22:54.219 )") 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.219 { 00:22:54.219 "params": { 00:22:54.219 "name": "Nvme$subsystem", 00:22:54.219 "trtype": "$TEST_TRANSPORT", 00:22:54.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.219 "adrfam": "ipv4", 00:22:54.219 "trsvcid": "$NVMF_PORT", 00:22:54.219 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.219 "hdgst": ${hdgst:-false}, 00:22:54.219 "ddgst": ${ddgst:-false} 00:22:54.219 }, 00:22:54.219 "method": "bdev_nvme_attach_controller" 00:22:54.219 } 00:22:54.219 EOF 00:22:54.219 )") 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.219 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.219 { 00:22:54.219 "params": { 00:22:54.219 "name": "Nvme$subsystem", 00:22:54.219 "trtype": "$TEST_TRANSPORT", 00:22:54.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "$NVMF_PORT", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.220 "hdgst": ${hdgst:-false}, 00:22:54.220 "ddgst": ${ddgst:-false} 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 } 00:22:54.220 EOF 00:22:54.220 )") 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.220 { 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme$subsystem", 00:22:54.220 "trtype": "$TEST_TRANSPORT", 00:22:54.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "$NVMF_PORT", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.220 "hdgst": 
${hdgst:-false}, 00:22:54.220 "ddgst": ${ddgst:-false} 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 } 00:22:54.220 EOF 00:22:54.220 )") 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.220 { 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme$subsystem", 00:22:54.220 "trtype": "$TEST_TRANSPORT", 00:22:54.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "$NVMF_PORT", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.220 "hdgst": ${hdgst:-false}, 00:22:54.220 "ddgst": ${ddgst:-false} 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 } 00:22:54.220 EOF 00:22:54.220 )") 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.220 { 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme$subsystem", 00:22:54.220 "trtype": "$TEST_TRANSPORT", 00:22:54.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "$NVMF_PORT", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.220 "hdgst": ${hdgst:-false}, 00:22:54.220 "ddgst": ${ddgst:-false} 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 
00:22:54.220 } 00:22:54.220 EOF 00:22:54.220 )") 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.220 { 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme$subsystem", 00:22:54.220 "trtype": "$TEST_TRANSPORT", 00:22:54.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "$NVMF_PORT", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.220 "hdgst": ${hdgst:-false}, 00:22:54.220 "ddgst": ${ddgst:-false} 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 } 00:22:54.220 EOF 00:22:54.220 )") 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.220 { 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme$subsystem", 00:22:54.220 "trtype": "$TEST_TRANSPORT", 00:22:54.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "$NVMF_PORT", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.220 "hdgst": ${hdgst:-false}, 00:22:54.220 "ddgst": ${ddgst:-false} 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 } 00:22:54.220 EOF 00:22:54.220 )") 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.220 { 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme$subsystem", 00:22:54.220 "trtype": "$TEST_TRANSPORT", 00:22:54.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "$NVMF_PORT", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.220 "hdgst": ${hdgst:-false}, 00:22:54.220 "ddgst": ${ddgst:-false} 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 } 00:22:54.220 EOF 00:22:54.220 )") 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.220 { 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme$subsystem", 00:22:54.220 "trtype": "$TEST_TRANSPORT", 00:22:54.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "$NVMF_PORT", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.220 "hdgst": ${hdgst:-false}, 00:22:54.220 "ddgst": ${ddgst:-false} 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 } 00:22:54.220 EOF 00:22:54.220 )") 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:54.220 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme1", 00:22:54.220 "trtype": "tcp", 00:22:54.220 "traddr": "10.0.0.2", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "4420", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.220 "hdgst": false, 00:22:54.220 "ddgst": false 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 },{ 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme2", 00:22:54.220 "trtype": "tcp", 00:22:54.220 "traddr": "10.0.0.2", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "4420", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:54.220 "hdgst": false, 00:22:54.220 "ddgst": false 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 },{ 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme3", 00:22:54.220 "trtype": "tcp", 00:22:54.220 "traddr": "10.0.0.2", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "4420", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:54.220 "hdgst": false, 00:22:54.220 "ddgst": false 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 },{ 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme4", 00:22:54.220 "trtype": "tcp", 00:22:54.220 "traddr": "10.0.0.2", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "4420", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:54.220 "hdgst": false, 00:22:54.220 "ddgst": false 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 },{ 
00:22:54.220 "params": { 00:22:54.220 "name": "Nvme5", 00:22:54.220 "trtype": "tcp", 00:22:54.220 "traddr": "10.0.0.2", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "4420", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:54.220 "hdgst": false, 00:22:54.220 "ddgst": false 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 },{ 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme6", 00:22:54.220 "trtype": "tcp", 00:22:54.220 "traddr": "10.0.0.2", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "4420", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:54.220 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:54.220 "hdgst": false, 00:22:54.220 "ddgst": false 00:22:54.220 }, 00:22:54.220 "method": "bdev_nvme_attach_controller" 00:22:54.220 },{ 00:22:54.220 "params": { 00:22:54.220 "name": "Nvme7", 00:22:54.220 "trtype": "tcp", 00:22:54.220 "traddr": "10.0.0.2", 00:22:54.220 "adrfam": "ipv4", 00:22:54.220 "trsvcid": "4420", 00:22:54.220 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:54.221 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:54.221 "hdgst": false, 00:22:54.221 "ddgst": false 00:22:54.221 }, 00:22:54.221 "method": "bdev_nvme_attach_controller" 00:22:54.221 },{ 00:22:54.221 "params": { 00:22:54.221 "name": "Nvme8", 00:22:54.221 "trtype": "tcp", 00:22:54.221 "traddr": "10.0.0.2", 00:22:54.221 "adrfam": "ipv4", 00:22:54.221 "trsvcid": "4420", 00:22:54.221 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:54.221 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:54.221 "hdgst": false, 00:22:54.221 "ddgst": false 00:22:54.221 }, 00:22:54.221 "method": "bdev_nvme_attach_controller" 00:22:54.221 },{ 00:22:54.221 "params": { 00:22:54.221 "name": "Nvme9", 00:22:54.221 "trtype": "tcp", 00:22:54.221 "traddr": "10.0.0.2", 00:22:54.221 "adrfam": "ipv4", 00:22:54.221 "trsvcid": "4420", 00:22:54.221 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:54.221 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:54.221 "hdgst": false, 00:22:54.221 "ddgst": false 00:22:54.221 }, 00:22:54.221 "method": "bdev_nvme_attach_controller" 00:22:54.221 },{ 00:22:54.221 "params": { 00:22:54.221 "name": "Nvme10", 00:22:54.221 "trtype": "tcp", 00:22:54.221 "traddr": "10.0.0.2", 00:22:54.221 "adrfam": "ipv4", 00:22:54.221 "trsvcid": "4420", 00:22:54.221 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:54.221 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:54.221 "hdgst": false, 00:22:54.221 "ddgst": false 00:22:54.221 }, 00:22:54.221 "method": "bdev_nvme_attach_controller" 00:22:54.221 }' 00:22:54.221 [2024-12-06 19:21:04.588786] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:22:54.221 [2024-12-06 19:21:04.588864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165286 ] 00:22:54.221 [2024-12-06 19:21:04.663379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.221 [2024-12-06 19:21:04.725202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.115 Running I/O for 10 seconds... 
00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:56.115 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1165286 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1165286 ']' 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1165286 00:22:56.373 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:56.631 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.631 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1165286 00:22:56.631 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.631 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.631 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1165286' 00:22:56.631 killing process with pid 1165286 00:22:56.631 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1165286 00:22:56.631 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1165286 00:22:56.631 
Received shutdown signal, test time was about 0.738791 seconds 00:22:56.631 00:22:56.631 Latency(us) 00:22:56.631 [2024-12-06T18:21:07.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.631 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme1n1 : 0.72 267.73 16.73 0.00 0.00 235443.14 18932.62 250104.79 00:22:56.631 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme2n1 : 0.74 261.12 16.32 0.00 0.00 235424.93 21554.06 260978.92 00:22:56.631 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme3n1 : 0.73 264.15 16.51 0.00 0.00 225550.79 20874.43 243891.01 00:22:56.631 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme4n1 : 0.72 266.16 16.63 0.00 0.00 218213.89 17282.09 259425.47 00:22:56.631 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme5n1 : 0.73 262.59 16.41 0.00 0.00 215907.24 21845.33 236123.78 00:22:56.631 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme6n1 : 0.70 183.50 11.47 0.00 0.00 296000.47 20874.43 257872.02 00:22:56.631 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme7n1 : 0.70 181.70 11.36 0.00 0.00 293112.04 38836.15 242337.56 00:22:56.631 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme8n1 : 0.74 260.17 16.26 0.00 0.00 
200572.27 18447.17 256318.58 00:22:56.631 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme9n1 : 0.70 183.89 11.49 0.00 0.00 271128.65 18544.26 250104.79 00:22:56.631 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.631 Verification LBA range: start 0x0 length 0x400 00:22:56.631 Nvme10n1 : 0.71 179.66 11.23 0.00 0.00 270493.77 23884.23 282727.16 00:22:56.631 [2024-12-06T18:21:07.208Z] =================================================================================================================== 00:22:56.631 [2024-12-06T18:21:07.208Z] Total : 2310.66 144.42 0.00 0.00 240569.49 17282.09 282727.16 00:22:56.889 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1165225 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 
-- # sync 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.819 rmmod nvme_tcp 00:22:57.819 rmmod nvme_fabrics 00:22:57.819 rmmod nvme_keyring 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1165225 ']' 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1165225 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1165225 ']' 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1165225 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.819 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1165225 00:22:58.077 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:58.077 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:58.077 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1165225' 00:22:58.077 killing process with pid 1165225 00:22:58.077 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1165225 00:22:58.077 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1165225 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.334 19:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.334 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.863 00:23:00.863 real 0m7.473s 00:23:00.863 user 0m22.532s 00:23:00.863 sys 0m1.406s 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.863 ************************************ 00:23:00.863 END TEST nvmf_shutdown_tc2 00:23:00.863 ************************************ 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.863 ************************************ 00:23:00.863 START TEST nvmf_shutdown_tc3 00:23:00.863 ************************************ 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.863 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.863 
19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.863 19:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.863 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:00.863 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:00.864 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.864 19:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:00.864 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.864 19:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:00.864 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:00.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:23:00.864 00:23:00.864 --- 10.0.0.2 ping statistics --- 00:23:00.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.864 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:23:00.864 00:23:00.864 --- 10.0.0.1 ping statistics --- 00:23:00.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.864 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.864 
19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1166704 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1166704 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1166704 ']' 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.864 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.864 [2024-12-06 19:21:11.233193] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:23:00.864 [2024-12-06 19:21:11.233284] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.865 [2024-12-06 19:21:11.306258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.865 [2024-12-06 19:21:11.363387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.865 [2024-12-06 19:21:11.363445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.865 [2024-12-06 19:21:11.363472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.865 [2024-12-06 19:21:11.363483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.865 [2024-12-06 19:21:11.363492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:00.865 [2024-12-06 19:21:11.365083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.865 [2024-12-06 19:21:11.365109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.865 [2024-12-06 19:21:11.365166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.865 [2024-12-06 19:21:11.365170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.123 [2024-12-06 19:21:11.519073] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.123 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.124 19:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.124 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.124 Malloc1 00:23:01.124 [2024-12-06 19:21:11.617480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.124 Malloc2 00:23:01.124 Malloc3 00:23:01.383 Malloc4 00:23:01.383 Malloc5 00:23:01.383 Malloc6 00:23:01.383 Malloc7 00:23:01.383 Malloc8 00:23:01.641 Malloc9 
00:23:01.641 Malloc10 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1166884 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1166884 /var/tmp/bdevperf.sock 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1166884 ']' 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:01.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.642 { 00:23:01.642 "params": { 00:23:01.642 "name": "Nvme$subsystem", 00:23:01.642 "trtype": "$TEST_TRANSPORT", 00:23:01.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.642 "adrfam": "ipv4", 00:23:01.642 "trsvcid": "$NVMF_PORT", 00:23:01.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.642 "hdgst": ${hdgst:-false}, 00:23:01.642 "ddgst": ${ddgst:-false} 00:23:01.642 }, 00:23:01.642 "method": "bdev_nvme_attach_controller" 00:23:01.642 } 00:23:01.642 EOF 00:23:01.642 )") 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.642 { 00:23:01.642 "params": { 00:23:01.642 "name": "Nvme$subsystem", 00:23:01.642 "trtype": "$TEST_TRANSPORT", 00:23:01.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.642 "adrfam": "ipv4", 00:23:01.642 "trsvcid": "$NVMF_PORT", 00:23:01.642 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.642 "hdgst": ${hdgst:-false}, 00:23:01.642 "ddgst": ${ddgst:-false} 00:23:01.642 }, 00:23:01.642 "method": "bdev_nvme_attach_controller" 00:23:01.642 } 00:23:01.642 EOF 00:23:01.642 )") 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.642 { 00:23:01.642 "params": { 00:23:01.642 "name": "Nvme$subsystem", 00:23:01.642 "trtype": "$TEST_TRANSPORT", 00:23:01.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.642 "adrfam": "ipv4", 00:23:01.642 "trsvcid": "$NVMF_PORT", 00:23:01.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.642 "hdgst": ${hdgst:-false}, 00:23:01.642 "ddgst": ${ddgst:-false} 00:23:01.642 }, 00:23:01.642 "method": "bdev_nvme_attach_controller" 00:23:01.642 } 00:23:01.642 EOF 00:23:01.642 )") 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.642 { 00:23:01.642 "params": { 00:23:01.642 "name": "Nvme$subsystem", 00:23:01.642 "trtype": "$TEST_TRANSPORT", 00:23:01.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.642 "adrfam": "ipv4", 00:23:01.642 "trsvcid": "$NVMF_PORT", 00:23:01.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.642 "hdgst": 
${hdgst:-false}, 00:23:01.642 "ddgst": ${ddgst:-false} 00:23:01.642 }, 00:23:01.642 "method": "bdev_nvme_attach_controller" 00:23:01.642 } 00:23:01.642 EOF 00:23:01.642 )") 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.642 { 00:23:01.642 "params": { 00:23:01.642 "name": "Nvme$subsystem", 00:23:01.642 "trtype": "$TEST_TRANSPORT", 00:23:01.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.642 "adrfam": "ipv4", 00:23:01.642 "trsvcid": "$NVMF_PORT", 00:23:01.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.642 "hdgst": ${hdgst:-false}, 00:23:01.642 "ddgst": ${ddgst:-false} 00:23:01.642 }, 00:23:01.642 "method": "bdev_nvme_attach_controller" 00:23:01.642 } 00:23:01.642 EOF 00:23:01.642 )") 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.642 { 00:23:01.642 "params": { 00:23:01.642 "name": "Nvme$subsystem", 00:23:01.642 "trtype": "$TEST_TRANSPORT", 00:23:01.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.642 "adrfam": "ipv4", 00:23:01.642 "trsvcid": "$NVMF_PORT", 00:23:01.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.642 "hdgst": ${hdgst:-false}, 00:23:01.642 "ddgst": ${ddgst:-false} 00:23:01.642 }, 00:23:01.642 "method": "bdev_nvme_attach_controller" 
00:23:01.642 } 00:23:01.642 EOF 00:23:01.642 )") 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.642 { 00:23:01.642 "params": { 00:23:01.642 "name": "Nvme$subsystem", 00:23:01.642 "trtype": "$TEST_TRANSPORT", 00:23:01.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.642 "adrfam": "ipv4", 00:23:01.642 "trsvcid": "$NVMF_PORT", 00:23:01.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.642 "hdgst": ${hdgst:-false}, 00:23:01.642 "ddgst": ${ddgst:-false} 00:23:01.642 }, 00:23:01.642 "method": "bdev_nvme_attach_controller" 00:23:01.642 } 00:23:01.642 EOF 00:23:01.642 )") 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.642 { 00:23:01.642 "params": { 00:23:01.642 "name": "Nvme$subsystem", 00:23:01.642 "trtype": "$TEST_TRANSPORT", 00:23:01.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.642 "adrfam": "ipv4", 00:23:01.642 "trsvcid": "$NVMF_PORT", 00:23:01.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.642 "hdgst": ${hdgst:-false}, 00:23:01.642 "ddgst": ${ddgst:-false} 00:23:01.642 }, 00:23:01.642 "method": "bdev_nvme_attach_controller" 00:23:01.642 } 00:23:01.642 EOF 00:23:01.642 )") 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.642 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.642 { 00:23:01.642 "params": { 00:23:01.642 "name": "Nvme$subsystem", 00:23:01.642 "trtype": "$TEST_TRANSPORT", 00:23:01.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.642 "adrfam": "ipv4", 00:23:01.642 "trsvcid": "$NVMF_PORT", 00:23:01.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.643 "hdgst": ${hdgst:-false}, 00:23:01.643 "ddgst": ${ddgst:-false} 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 } 00:23:01.643 EOF 00:23:01.643 )") 00:23:01.643 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.643 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.643 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.643 { 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme$subsystem", 00:23:01.643 "trtype": "$TEST_TRANSPORT", 00:23:01.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "$NVMF_PORT", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.643 "hdgst": ${hdgst:-false}, 00:23:01.643 "ddgst": ${ddgst:-false} 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 } 00:23:01.643 EOF 00:23:01.643 )") 00:23:01.643 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.643 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:23:01.643 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:01.643 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme1", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 },{ 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme2", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 },{ 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme3", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 },{ 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme4", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 },{ 
00:23:01.643 "params": { 00:23:01.643 "name": "Nvme5", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 },{ 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme6", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 },{ 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme7", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 },{ 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme8", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 },{ 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme9", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.643 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 },{ 00:23:01.643 "params": { 00:23:01.643 "name": "Nvme10", 00:23:01.643 "trtype": "tcp", 00:23:01.643 "traddr": "10.0.0.2", 00:23:01.643 "adrfam": "ipv4", 00:23:01.643 "trsvcid": "4420", 00:23:01.643 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.643 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.643 "hdgst": false, 00:23:01.643 "ddgst": false 00:23:01.643 }, 00:23:01.643 "method": "bdev_nvme_attach_controller" 00:23:01.643 }' 00:23:01.643 [2024-12-06 19:21:12.129613] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:23:01.643 [2024-12-06 19:21:12.129722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166884 ] 00:23:01.643 [2024-12-06 19:21:12.200025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.901 [2024-12-06 19:21:12.259938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.275 Running I/O for 10 seconds... 
00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.841 19:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:03.841 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:03.842 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1166704 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1166704 ']' 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1166704 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1166704 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1166704' 00:23:04.116 killing process with pid 1166704 00:23:04.116 19:21:14 
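The waitforio loop traced above (target/shutdown.sh lines 58-70 in the log) polls the bdev's read-op count up to ten times, 0.25 s apart, and succeeds once it reaches 100 — the log shows a first sample of 67 and a second of 131. The sketch below reproduces that loop standalone; read_io_count is a stub replacing the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` pipeline, and the sample values are taken from this log rather than any fixed behavior of bdevperf.

```shell
# Hypothetical standalone sketch of the waitforio polling loop traced above.
# The two samples (67, then 131) mirror the read_io_count values in the log.
samples="67 131"
read_io_count() {
  # Stub for the bdev_get_iostat RPC: consume the next sample from $samples
  # into the global $count. The real loop queries bdevperf over its UNIX
  # domain socket instead.
  set -- $samples
  count=$1
  shift
  samples=$*
}
waitforio() {
  # Poll up to 10 times; ret stays 1 (failure) unless the threshold is hit,
  # matching the i=10 countdown and '-ge 100' test in the trace.
  local ret=1 i
  for ((i = 10; i != 0; i--)); do
    read_io_count
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}
if waitforio; then
  echo "read count $count reached threshold"
fi
```

The countdown-with-break shape gives the shutdown test a bounded wait: it tolerates a slow bdevperf start (the first sample of 67 is below threshold) without hanging forever if I/O never materializes, since ret=1 propagates after ten failed polls.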
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1166704 00:23:04.116 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1166704 00:23:04.116 [2024-12-06 19:21:14.598614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.598988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with 
the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 
00:23:04.116 [2024-12-06 19:21:14.599153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.116 [2024-12-06 19:21:14.599211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 
19:21:14.599297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 [2024-12-06 19:21:14.599438] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d30 is same with the state(6) to be set 00:23:04.117 (message repeated for tqpair=0x1ac3d30)
00:23:04.117 [2024-12-06 19:21:14.600891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854e70 is same with the state(6) to be set (message repeated for tqpair=0x1854e70)
00:23:04.118 [2024-12-06 19:21:14.603092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4200 is same with the state(6) to be set (message repeated for tqpair=0x1ac4200)
00:23:04.118 [2024-12-06 19:21:14.606919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5090 is same with the state(6) to be set (message repeated for tqpair=0x1ac5090)
00:23:04.119 [2024-12-06 19:21:14.608865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5410 is same with the state(6) to be set (message repeated for tqpair=0x1ac5410)
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5410 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.609607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5410 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.609621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5410 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.609634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5410 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611222] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611365] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611514] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611652] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.120 [2024-12-06 19:21:14.611747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611809] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.611880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5790 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.612928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.612969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.612984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.612997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613029] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613181] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613301] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-12-06 19:21:14.613322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:04.121 the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.121 [2024-12-06 19:21:14.613360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.121 [2024-12-06 19:21:14.613372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-06 19:21:14.613384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.121 the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.121 [2024-12-06 
19:21:14.613410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.121 [2024-12-06 19:21:14.613422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with [2024-12-06 19:21:14.613434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsthe state(6) to be set 00:23:04.121 id:0 cdw10:00000000 cdw11:00000000 00:23:04.121 [2024-12-06 19:21:14.613449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.121 [2024-12-06 19:21:14.613462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22328a0 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 
19:21:14.613509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.121 [2024-12-06 19:21:14.613544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.121 [2024-12-06 19:21:14.613576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with [2024-12-06 19:21:14.613579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(6) to be set 00:23:04.121 id:0 cdw10:00000000 cdw11:00000000 00:23:04.121 [2024-12-06 19:21:14.613593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.121 [2024-12-06 19:21:14.613606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:04.121 [2024-12-06 19:21:14.613618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.121 [2024-12-06 19:21:14.613631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.121 [2024-12-06 19:21:14.613644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.121 [2024-12-06 19:21:14.613657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-06 19:21:14.613658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.121 the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.613683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with [2024-12-06 19:21:14.613683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239f60 is same the state(6) to be set 00:23:04.122 with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.613698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.613711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.613727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 
00:23:04.122 [2024-12-06 19:21:14.613739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.613750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.613762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.613767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-12-06 19:21:14.613774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5c60 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:04.122 the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.613790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.613806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.613819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.613833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.613847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.613861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.613874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.613886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc5310 is same with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.613933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.613954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.613978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.613991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2232a80 is same with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.614121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614141] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db9130 is same with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.614280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f05c0 is same with the state(6) to be set 00:23:04.122 [2024-12-06 19:21:14.614453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122 [2024-12-06 19:21:14.614535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122 [2024-12-06 19:21:14.614549] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122
[2024-12-06 19:21:14.614563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122
[2024-12-06 19:21:14.614576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2d110 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122
[2024-12-06 19:21:14.614634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122
[2024-12-06 19:21:14.614647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122
[2024-12-06 19:21:14.614660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122
[2024-12-06 19:21:14.614700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122
[2024-12-06 19:21:14.614720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122
[2024-12-06 19:21:14.614748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.122
[2024-12-06 19:21:14.614760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.122
[2024-12-06 19:21:14.614772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4e80 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.122
[2024-12-06 19:21:14.614821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.123
[2024-12-06 19:21:14.614848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123
[2024-12-06 19:21:14.614860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.123
[2024-12-06 19:21:14.614873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123
[2024-12-06 19:21:14.614885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.123
[2024-12-06 19:21:14.614897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123
[2024-12-06 19:21:14.614924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.123
[2024-12-06 19:21:14.614936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123
[2024-12-06 19:21:14.614959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbaea0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.614983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.615000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.615020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.615032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.615052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.615064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.615076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.615088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.615100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123
[2024-12-06 19:21:14.615112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same
with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 
00:23:04.123 [2024-12-06 19:21:14.615274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 19:21:14.615414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18549a0 is same with the state(6) to be set 00:23:04.123 [2024-12-06 
19:21:14.615537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.615978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.615994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.123 [2024-12-06 19:21:14.616009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.123 [2024-12-06 19:21:14.616024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 
19:21:14.616281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.616957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.616977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 
19:21:14.616993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.617007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.617031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.617045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.617065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.617080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.617096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.617111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.617127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.617141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.617157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.617171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.617187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.617201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.617217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.617231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.124 [2024-12-06 19:21:14.617247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.124 [2024-12-06 19:21:14.617261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:04.125 [2024-12-06 19:21:14.617527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.125 [2024-12-06 19:21:14.617770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 
19:21:14.617875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.617968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.617988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.125 [2024-12-06 19:21:14.618534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.125 [2024-12-06 19:21:14.618548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 
19:21:14.618617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.618980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.618996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 
[2024-12-06 19:21:14.619350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.126 [2024-12-06 19:21:14.619814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.126 [2024-12-06 19:21:14.619828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.619842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7da0 is same with the state(6) to be set 00:23:04.127 [2024-12-06 19:21:14.623018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:04.127 [2024-12-06 
19:21:14.623074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:04.127 [2024-12-06 19:21:14.623104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db9130 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.623129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbaea0 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.623706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22328a0 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.623758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239f60 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.623826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.127 [2024-12-06 19:21:14.623853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.623881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.127 [2024-12-06 19:21:14.623897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.623912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.127 [2024-12-06 19:21:14.623925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.623947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.127 
[2024-12-06 19:21:14.623961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.623974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a140 is same with the state(6) to be set 00:23:04.127 [2024-12-06 19:21:14.624012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc5310 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.624048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2232a80 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.624080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f05c0 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.624111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2d110 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.624144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc4e80 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.624932] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:04.127 [2024-12-06 19:21:14.625010] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:04.127 [2024-12-06 19:21:14.625093] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:04.127 [2024-12-06 19:21:14.625295] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:04.127 [2024-12-06 19:21:14.625464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.127 [2024-12-06 19:21:14.625494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbaea0 with addr=10.0.0.2, port=4420 00:23:04.127 [2024-12-06 19:21:14.625512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1dbaea0 is same with the state(6) to be set 00:23:04.127 [2024-12-06 19:21:14.625602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.127 [2024-12-06 19:21:14.625628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db9130 with addr=10.0.0.2, port=4420 00:23:04.127 [2024-12-06 19:21:14.625644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db9130 is same with the state(6) to be set 00:23:04.127 [2024-12-06 19:21:14.625742] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:04.127 [2024-12-06 19:21:14.625810] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:04.127 [2024-12-06 19:21:14.625945] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:04.127 [2024-12-06 19:21:14.626024] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:04.127 [2024-12-06 19:21:14.626070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbaea0 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.626097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db9130 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.626217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:04.127 [2024-12-06 19:21:14.626239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:04.127 [2024-12-06 19:21:14.626258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:04.127 [2024-12-06 19:21:14.626276] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:23:04.127 [2024-12-06 19:21:14.626293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:04.127 [2024-12-06 19:21:14.626306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:04.127 [2024-12-06 19:21:14.626319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:04.127 [2024-12-06 19:21:14.626337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:04.127 [2024-12-06 19:21:14.633765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a140 (9): Bad file descriptor 00:23:04.127 [2024-12-06 19:21:14.634062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634381] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.127 [2024-12-06 19:21:14.634686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.127 [2024-12-06 19:21:14.634702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.634732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.634761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.634791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.634820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.634854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.634884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.634913] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.634942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.634979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.634993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635082] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 
19:21:14.635433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.128 [2024-12-06 19:21:14.635910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.128 [2024-12-06 19:21:14.635924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.635940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.635954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:04.129 [2024-12-06 19:21:14.635974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.635989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.636006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.636024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.636041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.636056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.636072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.636087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.636102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc90c0 is same with the state(6) to be set 00:23:04.129 [2024-12-06 19:21:14.637410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:04.129 [2024-12-06 19:21:14.637828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.129 [2024-12-06 19:21:14.637977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.129 [2024-12-06 19:21:14.637993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.129 [2024-12-06 19:21:14.638007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 44 identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:17 through cid:60, lba:26752 through lba:32256 ...]
00:23:04.130 [2024-12-06 19:21:14.639383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.130 [2024-12-06 19:21:14.639397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.130 [2024-12-06 19:21:14.639412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fca210 is same with the state(6) to be set
00:23:04.130 [2024-12-06 19:21:14.640655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.130 [2024-12-06 19:21:14.640685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:1 through cid:62, lba:16512 through lba:24320 ...]
00:23:04.132 [2024-12-06 19:21:14.642631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.132 [2024-12-06 19:21:14.642646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.132 [2024-12-06 19:21:14.642660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8fa0 is same with the state(6) to be set
00:23:04.132 [2024-12-06 19:21:14.643939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.132 [2024-12-06 19:21:14.643962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 6 identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:6 through cid:11, lba:17152 through lba:17792 ...]
00:23:04.132 [2024-12-06 19:21:14.644174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.132 [2024-12-06 19:21:14.644188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.132 [2024-12-06 19:21:14.644545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.132 [2024-12-06 19:21:14.644561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644701] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.644978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.644994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 
19:21:14.645051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:04.133 [2024-12-06 19:21:14.645553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645731] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.133 [2024-12-06 19:21:14.645746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.133 [2024-12-06 19:21:14.645759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.645784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.645798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.645813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.645827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.645842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.645856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.645871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.645885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.645899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x21ca260 is same with the state(6) to be set 00:23:04.134 [2024-12-06 19:21:14.647146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647662] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.647980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.647995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.648010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.648025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 
19:21:14.648040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.648056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.648073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.648095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.134 [2024-12-06 19:21:14.648113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.134 [2024-12-06 19:21:14.648127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:04.135 [2024-12-06 19:21:14.648573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648742] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.648983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.648999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.649012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.649028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.649042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.649057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.649071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.649086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.649099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.649115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.649128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.649142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cb520 is same with the state(6) to be set 00:23:04.135 [2024-12-06 19:21:14.650379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.650402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.650423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.650438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.650454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.650473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.650490] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.650504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.650519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.650532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.135 [2024-12-06 19:21:14.650548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.135 [2024-12-06 19:21:14.650562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.650976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.650990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651005] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651165] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 
19:21:14.651512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.136 [2024-12-06 19:21:14.651767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.136 [2024-12-06 19:21:14.651781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.651797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.651811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.651828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.651842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.651859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.651873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.651889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.651903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.651919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.651933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.651948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.651962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.651978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.651997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:04.137 [2024-12-06 19:21:14.652043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652207] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.652336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.652351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cc7e0 is same with the state(6) to be set 00:23:04.137 [2024-12-06 19:21:14.653641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.653679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.653714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.653731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.653748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.653764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.653781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.653796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.653813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.653828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.653844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.653859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.653876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:04.137 [2024-12-06 19:21:14.653892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.653919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.653933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.653950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.653965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.653982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.653997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.137 [2024-12-06 19:21:14.654327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.137 [2024-12-06 19:21:14.654342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654611] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654793] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.654977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.654999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 
19:21:14.655174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.138 [2024-12-06 19:21:14.655467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.138 [2024-12-06 19:21:14.655484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.655499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.655514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.655528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.655545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.655559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.655575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.655594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.655611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.655625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.655641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.655656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.655679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.655695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:04.139 [2024-12-06 19:21:14.655719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.655737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.655753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20705c0 is same with the state(6) to be set 00:23:04.139 [2024-12-06 19:21:14.657020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:04.139 [2024-12-06 19:21:14.657054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:04.139 [2024-12-06 19:21:14.657076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:04.139 [2024-12-06 19:21:14.657177] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:04.139 [2024-12-06 19:21:14.657205] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:04.139 [2024-12-06 19:21:14.657235] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:23:04.139 [2024-12-06 19:21:14.657260] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:23:04.139 [2024-12-06 19:21:14.657282] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:23:04.139 [2024-12-06 19:21:14.657300] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:04.139 [2024-12-06 19:21:14.657416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:04.139 [2024-12-06 19:21:14.657444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:04.139 [2024-12-06 19:21:14.657465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:04.139 [2024-12-06 19:21:14.657485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:04.139 [2024-12-06 19:21:14.657705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.139 [2024-12-06 19:21:14.657737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc5310 with addr=10.0.0.2, port=4420 00:23:04.139 [2024-12-06 19:21:14.657754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc5310 is same with the state(6) to be set 00:23:04.139 [2024-12-06 19:21:14.657857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.139 [2024-12-06 19:21:14.657882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc4e80 with addr=10.0.0.2, port=4420 00:23:04.139 [2024-12-06 19:21:14.657898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4e80 is same with the state(6) to be set 00:23:04.139 [2024-12-06 19:21:14.657988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.139 [2024-12-06 19:21:14.658012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f05c0 with addr=10.0.0.2, port=4420 00:23:04.139 [2024-12-06 19:21:14.658029] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f05c0 is same with the state(6) to be set 00:23:04.139 [2024-12-06 19:21:14.659972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:04.139 [2024-12-06 19:21:14.660000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:04.139 [2024-12-06 19:21:14.660121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.139 [2024-12-06 19:21:14.660147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2d110 with addr=10.0.0.2, port=4420 00:23:04.139 [2024-12-06 19:21:14.660171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2d110 is same with the state(6) to be set 00:23:04.139 [2024-12-06 19:21:14.660254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.139 [2024-12-06 19:21:14.660280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2232a80 with addr=10.0.0.2, port=4420 00:23:04.139 [2024-12-06 19:21:14.660296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2232a80 is same with the state(6) to be set 00:23:04.139 [2024-12-06 19:21:14.660385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.139 [2024-12-06 19:21:14.660409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22328a0 with addr=10.0.0.2, port=4420 00:23:04.139 [2024-12-06 19:21:14.660426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22328a0 is same with the state(6) to be set 00:23:04.139 [2024-12-06 19:21:14.660516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.139 [2024-12-06 19:21:14.660541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x2239f60 with addr=10.0.0.2, port=4420 00:23:04.139 [2024-12-06 19:21:14.660557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239f60 is same with the state(6) to be set 00:23:04.139 [2024-12-06 19:21:14.660581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc5310 (9): Bad file descriptor 00:23:04.139 [2024-12-06 19:21:14.660601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc4e80 (9): Bad file descriptor 00:23:04.139 [2024-12-06 19:21:14.660619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f05c0 (9): Bad file descriptor 00:23:04.139 [2024-12-06 19:21:14.660758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.660783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.660810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.660826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.660843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.660859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.139 [2024-12-06 19:21:14.660875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.139 [2024-12-06 19:21:14.660890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.139 [2024-12-06 19:21:14.660906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.139 [2024-12-06 19:21:14.660926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 55 further identical READ (sqid:1, cid:5-59, nsid:1, lba:17024-23936 stepping by 128, len:128) / ABORTED - SQ DELETION (00/08) completion pairs omitted ...]
00:23:04.141 [2024-12-06 19:21:14.662692] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.141 [2024-12-06 19:21:14.662707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.141 [2024-12-06 19:21:14.662722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.141 [2024-12-06 19:21:14.662737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.141 [2024-12-06 19:21:14.662753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.141 [2024-12-06 19:21:14.662768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.141 [2024-12-06 19:21:14.662784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.141 [2024-12-06 19:21:14.662799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.141 [2024-12-06 19:21:14.662813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206f380 is same with the state(6) to be set
00:23:04.400 task offset: 27392 on job bdev=Nvme3n1 fails
00:23:04.400
00:23:04.400 Latency(us)
00:23:04.400 [2024-12-06T18:21:14.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.400 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.400 Job: Nvme1n1 ended in about 0.94 seconds with error
00:23:04.400 Verification LBA range: start 0x0 length 0x400
00:23:04.400 Nvme1n1 : 0.94 203.59 12.72 67.86 0.00 233161.20 18155.90 254765.13
00:23:04.400 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.400 Job: Nvme2n1 ended in about 0.95 seconds with error
00:23:04.400 Verification LBA range: start 0x0 length 0x400
00:23:04.401 Nvme2n1 : 0.95 202.88 12.68 67.63 0.00 229222.59 18252.99 254765.13
00:23:04.401 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.401 Job: Nvme3n1 ended in about 0.93 seconds with error
00:23:04.401 Verification LBA range: start 0x0 length 0x400
00:23:04.401 Nvme3n1 : 0.93 207.03 12.94 69.01 0.00 219920.55 5291.43 256318.58
00:23:04.401 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.401 Job: Nvme4n1 ended in about 0.93 seconds with error
00:23:04.401 Verification LBA range: start 0x0 length 0x400
00:23:04.401 Nvme4n1 : 0.93 206.79 12.92 68.93 0.00 215622.92 9126.49 259425.47
00:23:04.401 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.401 Job: Nvme5n1 ended in about 0.95 seconds with error
00:23:04.401 Verification LBA range: start 0x0 length 0x400
00:23:04.401 Nvme5n1 : 0.95 134.79 8.42 67.40 0.00 288508.08 22719.15 250104.79
00:23:04.401 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.401 Job: Nvme6n1 ended in about 0.95 seconds with error
00:23:04.401 Verification LBA range: start 0x0 length 0x400
00:23:04.401 Nvme6n1 : 0.95 139.59 8.72 67.17 0.00 276360.59 20194.80 251658.24
00:23:04.401 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.401 Job: Nvme7n1 ended in about 0.96 seconds with error
00:23:04.401 Verification LBA range: start 0x0 length 0x400
00:23:04.401 Nvme7n1 : 0.96 133.88 8.37 66.94 0.00 278658.97 34758.35 254765.13
00:23:04.401 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.401 Job: Nvme8n1 ended in about 0.96 seconds with error
00:23:04.401 Verification LBA range: start 0x0 length 0x400
00:23:04.401 Nvme8n1 : 0.96 204.32 12.77 66.72 0.00 202168.29 18350.08 248551.35
00:23:04.401 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.401 Job: Nvme9n1 ended in about 0.97 seconds with error
00:23:04.401 Verification LBA range: start 0x0 length 0x400
00:23:04.401 Nvme9n1 : 0.97 132.00 8.25 66.00 0.00 271298.31 20388.98 262532.36
00:23:04.401 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.401 Job: Nvme10n1 ended in about 0.96 seconds with error
00:23:04.401 Verification LBA range: start 0x0 length 0x400
00:23:04.401 Nvme10n1 : 0.96 132.97 8.31 66.48 0.00 262924.52 20777.34 284280.60
00:23:04.401 [2024-12-06T18:21:14.978Z] ===================================================================================================================
00:23:04.401 [2024-12-06T18:21:14.978Z] Total : 1697.84 106.12 674.14 0.00 243816.37 5291.43 284280.60
00:23:04.401 [2024-12-06 19:21:14.693183] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:04.401 [2024-12-06 19:21:14.693255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:04.401 [2024-12-06 19:21:14.693485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:04.401 [2024-12-06 19:21:14.693521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db9130 with addr=10.0.0.2, port=4420
[2024-12-06 19:21:14.693542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db9130 is same with the state(6) to be set
00:23:04.401 [2024-12-06 19:21:14.693630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:04.401 [2024-12-06 19:21:14.693657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbaea0 with addr=10.0.0.2, port=4420
[2024-12-06 19:21:14.693682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1dbaea0 is same with the state(6) to be set
00:23:04.401 [2024-12-06 19:21:14.693709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2d110 (9): Bad file descriptor
00:23:04.401 [2024-12-06 19:21:14.693735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2232a80 (9): Bad file descriptor
00:23:04.401 [2024-12-06 19:21:14.693755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22328a0 (9): Bad file descriptor
00:23:04.401 [2024-12-06 19:21:14.693774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239f60 (9): Bad file descriptor
00:23:04.401 [2024-12-06 19:21:14.693792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.693807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.693825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:04.401 [2024-12-06 19:21:14.693845] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:04.401 [2024-12-06 19:21:14.693865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.693888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.693902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:04.401 [2024-12-06 19:21:14.693915] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:04.401 [2024-12-06 19:21:14.693929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.693942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.693954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:04.401 [2024-12-06 19:21:14.693966] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:04.401 [2024-12-06 19:21:14.694274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:04.401 [2024-12-06 19:21:14.694309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223a140 with addr=10.0.0.2, port=4420
[2024-12-06 19:21:14.694326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a140 is same with the state(6) to be set
00:23:04.401 [2024-12-06 19:21:14.694345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db9130 (9): Bad file descriptor
00:23:04.401 [2024-12-06 19:21:14.694364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbaea0 (9): Bad file descriptor
00:23:04.401 [2024-12-06 19:21:14.694381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.694394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.694407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:04.401 [2024-12-06 19:21:14.694421] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:04.401 [2024-12-06 19:21:14.694436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.694448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.694460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:04.401 [2024-12-06 19:21:14.694473] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:04.401 [2024-12-06 19:21:14.694487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.694507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.694519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:23:04.401 [2024-12-06 19:21:14.694532] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:23:04.401 [2024-12-06 19:21:14.694546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.694558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.694570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:04.401 [2024-12-06 19:21:14.694582] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:04.401 [2024-12-06 19:21:14.694682] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:23:04.401 [2024-12-06 19:21:14.694722] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:23:04.401 [2024-12-06 19:21:14.695123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a140 (9): Bad file descriptor
00:23:04.401 [2024-12-06 19:21:14.695150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.695165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.695178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:04.401 [2024-12-06 19:21:14.695191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:04.401 [2024-12-06 19:21:14.695205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.695218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.695230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:04.401 [2024-12-06 19:21:14.695242] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:04.401 [2024-12-06 19:21:14.695312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:04.401 [2024-12-06 19:21:14.695337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:04.401 [2024-12-06 19:21:14.695354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:04.401 [2024-12-06 19:21:14.695369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:04.401 [2024-12-06 19:21:14.695385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:04.401 [2024-12-06 19:21:14.695401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:04.401 [2024-12-06 19:21:14.695417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:04.401 [2024-12-06 19:21:14.695478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:23:04.401 [2024-12-06 19:21:14.695495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:23:04.401 [2024-12-06 19:21:14.695508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:23:04.402 [2024-12-06 19:21:14.695521] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:23:04.402 [2024-12-06 19:21:14.695683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:04.402 [2024-12-06 19:21:14.695711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f05c0 with addr=10.0.0.2, port=4420
[2024-12-06 19:21:14.695731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f05c0 is same with the state(6) to be set
00:23:04.402 [2024-12-06 19:21:14.695834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:04.402 [2024-12-06 19:21:14.695859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc4e80 with addr=10.0.0.2, port=4420
[2024-12-06 19:21:14.695875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4e80 is same with the state(6) to be set
00:23:04.402 [2024-12-06 19:21:14.696003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:04.402 [2024-12-06 19:21:14.696027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc5310 with addr=10.0.0.2, port=4420
[2024-12-06 19:21:14.696043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc5310 is same with the state(6) to be set
00:23:04.402 [2024-12-06 19:21:14.696128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:04.402 [2024-12-06 19:21:14.696154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2239f60 with addr=10.0.0.2, port=4420
[2024-12-06 19:21:14.696170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239f60 is same with the state(6) to be set
00:23:04.402 [2024-12-06 19:21:14.696253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:04.402 [2024-12-06 19:21:14.696278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock:
*ERROR*: sock connection error of tqpair=0x22328a0 with addr=10.0.0.2, port=4420 00:23:04.402 [2024-12-06 19:21:14.696294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22328a0 is same with the state(6) to be set 00:23:04.402 [2024-12-06 19:21:14.696377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.402 [2024-12-06 19:21:14.696403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2232a80 with addr=10.0.0.2, port=4420 00:23:04.402 [2024-12-06 19:21:14.696419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2232a80 is same with the state(6) to be set 00:23:04.402 [2024-12-06 19:21:14.696506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.402 [2024-12-06 19:21:14.696531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2d110 with addr=10.0.0.2, port=4420 00:23:04.402 [2024-12-06 19:21:14.696548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2d110 is same with the state(6) to be set 00:23:04.402 [2024-12-06 19:21:14.696592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f05c0 (9): Bad file descriptor 00:23:04.402 [2024-12-06 19:21:14.696617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc4e80 (9): Bad file descriptor 00:23:04.402 [2024-12-06 19:21:14.696635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc5310 (9): Bad file descriptor 00:23:04.402 [2024-12-06 19:21:14.696653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239f60 (9): Bad file descriptor 00:23:04.402 [2024-12-06 19:21:14.696682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22328a0 (9): Bad file descriptor 00:23:04.402 [2024-12-06 19:21:14.696702] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2232a80 (9): Bad file descriptor 00:23:04.402 [2024-12-06 19:21:14.696720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2d110 (9): Bad file descriptor 00:23:04.402 [2024-12-06 19:21:14.696758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:04.402 [2024-12-06 19:21:14.696776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:04.402 [2024-12-06 19:21:14.696792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:04.402 [2024-12-06 19:21:14.696804] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:04.402 [2024-12-06 19:21:14.696818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:04.402 [2024-12-06 19:21:14.696830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:04.402 [2024-12-06 19:21:14.696843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:04.402 [2024-12-06 19:21:14.696855] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:04.402 [2024-12-06 19:21:14.696868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:04.402 [2024-12-06 19:21:14.696885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:04.402 [2024-12-06 19:21:14.696898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:04.402 [2024-12-06 19:21:14.696910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:04.402 [2024-12-06 19:21:14.696924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:04.402 [2024-12-06 19:21:14.696935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:04.402 [2024-12-06 19:21:14.696948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:04.402 [2024-12-06 19:21:14.696960] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:04.402 [2024-12-06 19:21:14.696973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:04.402 [2024-12-06 19:21:14.696985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:04.402 [2024-12-06 19:21:14.696997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:04.402 [2024-12-06 19:21:14.697009] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:04.402 [2024-12-06 19:21:14.697022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:04.402 [2024-12-06 19:21:14.697034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:04.402 [2024-12-06 19:21:14.697046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:23:04.402 [2024-12-06 19:21:14.697058] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:04.402 [2024-12-06 19:21:14.697071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:04.402 [2024-12-06 19:21:14.697084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:04.402 [2024-12-06 19:21:14.697097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:04.402 [2024-12-06 19:21:14.697110] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:04.661 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1166884 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1166884 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1166884 
00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.600 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.600 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.600 rmmod nvme_tcp 00:23:05.600 rmmod nvme_fabrics 00:23:05.600 rmmod nvme_keyring 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1166704 ']' 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1166704 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1166704 ']' 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1166704 00:23:05.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1166704) - No such process 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1166704 is not found' 00:23:05.860 Process with pid 1166704 is not found 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.860 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.860 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.764 00:23:07.764 real 0m7.252s 00:23:07.764 user 0m17.382s 00:23:07.764 sys 0m1.422s 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.764 ************************************ 00:23:07.764 END TEST nvmf_shutdown_tc3 00:23:07.764 ************************************ 00:23:07.764 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.764 ************************************ 00:23:07.764 START TEST nvmf_shutdown_tc4 00:23:07.764 ************************************ 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.764 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:07.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:07.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.764 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.765 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:07.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.765 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:07.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.765 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.022 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.022 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.022 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:08.022 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.022 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.022 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.022 
19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:08.022 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:08.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:23:08.022 00:23:08.022 --- 10.0.0.2 ping statistics --- 00:23:08.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.023 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:08.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:23:08.023 00:23:08.023 --- 10.0.0.1 ping statistics --- 00:23:08.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.023 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:08.023 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1167786 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1167786 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1167786 ']' 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.023 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.023 [2024-12-06 19:21:18.533721] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:23:08.023 [2024-12-06 19:21:18.533809] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.281 [2024-12-06 19:21:18.603727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.281 [2024-12-06 19:21:18.660849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.281 [2024-12-06 19:21:18.660907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.281 [2024-12-06 19:21:18.660938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.281 [2024-12-06 19:21:18.660950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.281 [2024-12-06 19:21:18.660960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:08.281 [2024-12-06 19:21:18.662448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.281 [2024-12-06 19:21:18.662514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.281 [2024-12-06 19:21:18.662580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:08.281 [2024-12-06 19:21:18.662583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.281 [2024-12-06 19:21:18.801463] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.281 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.281 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.539 Malloc1 00:23:08.539 [2024-12-06 19:21:18.888787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.539 Malloc2 00:23:08.539 Malloc3 00:23:08.539 Malloc4 00:23:08.539 Malloc5 00:23:08.539 Malloc6 00:23:08.797 Malloc7 00:23:08.797 Malloc8 00:23:08.797 Malloc9 
00:23:08.797 Malloc10 00:23:08.797 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.797 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:08.797 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.797 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.797 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1167881 00:23:08.797 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:08.797 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:09.054 [2024-12-06 19:21:19.412678] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1167786 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1167786 ']' 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1167786 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1167786 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1167786' 00:23:14.375 killing process with pid 1167786 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1167786 00:23:14.375 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1167786 00:23:14.375 [2024-12-06 19:21:24.414532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175ac10 is same with the state(6) to be set 00:23:14.375 [2024-12-06 
19:21:24.414648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175ac10 is same with the state(6) to be set 00:23:14.375
[same message repeated at 19:21:24.414676 and 19:21:24.414696 (tqpair=0x175ac10), and at 19:21:24.416493, 19:21:24.416529, 19:21:24.416545 (tqpair=0x14eb9a0)]
Write completed with error (sct=0, sc=8) 00:23:14.376
starting I/O failed: -6 00:23:14.376
[the two messages above repeat for the remaining queued I/Os; duplicates omitted]
[2024-12-06 19:21:24.421717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:14.376
[2024-12-06 19:21:24.422206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175be60 is same with the state(6) to be set 00:23:14.376
[same message repeated at 19:21:24.422241, 19:21:24.422269, 19:21:24.422283, 19:21:24.422296, 19:21:24.422308 (tqpair=0x175be60), interleaved with further Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages]
[2024-12-06 19:21:24.422702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c1e0 is same with the state(6) to be set 00:23:14.376
[same message repeated at 19:21:24.422748, 19:21:24.422764, 19:21:24.422777, 19:21:24.422791, 19:21:24.422804, 19:21:24.422817, 19:21:24.422830 (tqpair=0x175c1e0), interleaved with further Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages]
[2024-12-06 19:21:24.422869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:14.376
[2024-12-06 19:21:24.423270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175b4a0 is same with the state(6) to be set 00:23:14.376
[same message repeated at 19:21:24.423305, 19:21:24.423322, 19:21:24.423339, 19:21:24.423352, 19:21:24.423365 (tqpair=0x175b4a0), interleaved with further Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages]
[2024-12-06 19:21:24.424068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:14.377
[further Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages omitted]
[2024-12-06 19:21:24.425822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:14.377
NVMe io qpair process completion error 00:23:14.377
[further Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages omitted]
[2024-12-06 19:21:24.427108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:14.378
Write completed with error (sct=0, sc=8) 00:23:14.378
starting I/O failed: -6 00:23:14.378
[duplicates omitted]
Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 [2024-12-06 19:21:24.428219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 [2024-12-06 19:21:24.428516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with Write completed with error (sct=0, sc=8) 00:23:14.378 the state(6) to be set 00:23:14.378 starting I/O failed: -6 00:23:14.378 [2024-12-06 19:21:24.428547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with the state(6) to be set 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 [2024-12-06 19:21:24.428562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with the state(6) to be set 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 [2024-12-06 19:21:24.428576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with the state(6) to be set 00:23:14.378 starting I/O failed: -6 00:23:14.378 [2024-12-06 19:21:24.428588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with Write completed with error (sct=0, sc=8) 00:23:14.378 the state(6) to be set 00:23:14.378 [2024-12-06 19:21:24.428601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x175c6b0 is same with the state(6) to be set 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 [2024-12-06 19:21:24.428613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with the state(6) to be set 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 [2024-12-06 19:21:24.428626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with the state(6) to be set 00:23:14.378 starting I/O failed: -6 00:23:14.378 [2024-12-06 19:21:24.428638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with the state(6) to be set 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 [2024-12-06 19:21:24.428650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with the state(6) to be set 00:23:14.378 starting I/O failed: -6 00:23:14.378 [2024-12-06 19:21:24.428662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175c6b0 is same with the state(6) to be set 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error 
(sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.378 starting I/O failed: -6 00:23:14.378 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 [2024-12-06 19:21:24.429386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such 
device or address) on qpair id 1 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 
00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, 
sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 [2024-12-06 19:21:24.431777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:14.379 NVMe io qpair process completion error 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, 
sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write 
completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 [2024-12-06 19:21:24.433020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write 
completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 [2024-12-06 19:21:24.434102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting 
I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write 
completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 [2024-12-06 19:21:24.435234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ 
transport error -6 (No such device or address) on qpair id 1 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.379 starting I/O failed: -6 00:23:14.379 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write 
completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 Write completed with error (sct=0, sc=8) 00:23:14.380 starting I/O failed: -6 00:23:14.380 
00:23:14.380 Write completed with error (sct=0, sc=8)
00:23:14.380 starting I/O failed: -6
00:23:14.380 [... the two entries above repeat for each queued I/O on the failing qpairs; duplicate entries omitted ...]
00:23:14.380 [2024-12-06 19:21:24.437116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.380 NVMe io qpair process completion error
00:23:14.380 [2024-12-06 19:21:24.438455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.380 [2024-12-06 19:21:24.439577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.380 [2024-12-06 19:21:24.440707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.381 [2024-12-06 19:21:24.442425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.381 NVMe io qpair process completion error
00:23:14.381 [2024-12-06 19:21:24.443737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.381 [2024-12-06 19:21:24.444849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.381 [2024-12-06 19:21:24.445972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.381 [2024-12-06 19:21:24.448178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.381 NVMe io qpair process completion error
00:23:14.382 [2024-12-06 19:21:24.449432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.382 [2024-12-06 19:21:24.450513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.382 Write completed with error (sct=0, sc=8)
completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 
00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 [2024-12-06 19:21:24.451716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or 
address) on qpair id 4 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 
00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, 
sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 [2024-12-06 19:21:24.455084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:14.382 NVMe io qpair process completion error 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed 
with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 [2024-12-06 19:21:24.456342] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 starting I/O failed: -6 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.382 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write 
completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 [2024-12-06 19:21:24.457459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error 
(sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting 
I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 [2024-12-06 19:21:24.458637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 
00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, 
sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error 
(sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 [2024-12-06 19:21:24.462710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:14.383 NVMe io qpair process completion error 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 starting I/O failed: -6 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 00:23:14.383 Write completed with error (sct=0, sc=8) 
00:23:14.383 Write completed with error (sct=0, sc=8)
00:23:14.383 starting I/O failed: -6
00:23:14.384 [2024-12-06 19:21:24.468465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.384 [2024-12-06 19:21:24.469469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.384 [2024-12-06 19:21:24.470688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.385 [2024-12-06 19:21:24.472507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.385 NVMe io qpair process completion error
00:23:14.385 [2024-12-06 19:21:24.473783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.385 [2024-12-06 19:21:24.474888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.385 [2024-12-06 19:21:24.476275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.385 Write completed with error (sct=0, sc=8)
00:23:14.385 starting I/O failed: -6
00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: 
-6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.385 Write completed with error (sct=0, sc=8) 00:23:14.385 starting I/O failed: -6 00:23:14.386 [2024-12-06 19:21:24.478815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:14.386 NVMe io qpair process completion error 00:23:14.386 Initializing NVMe Controllers 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:14.386 Controller IO queue size 128, less than required. 
00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:14.386 Controller IO queue size 128, less than required. 00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:14.386 Controller IO queue size 128, less than required. 00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:14.386 Controller IO queue size 128, less than required. 00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:14.386 Controller IO queue size 128, less than required. 00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.386 Controller IO queue size 128, less than required. 00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:23:14.386 Controller IO queue size 128, less than required. 00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:23:14.386 Controller IO queue size 128, less than required. 
00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:14.386 Controller IO queue size 128, less than required. 00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:14.386 Controller IO queue size 128, less than required. 00:23:14.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:23:14.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:23:14.386 Initialization complete. Launching workers. 
00:23:14.386 ======================================================== 00:23:14.386 Latency(us) 00:23:14.386 Device Information : IOPS MiB/s Average min max 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1859.84 79.92 68832.42 855.00 132849.87 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1800.82 77.38 71109.49 910.40 134736.54 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1860.91 79.96 68838.46 875.47 137379.97 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1848.51 79.43 68472.57 992.54 120160.34 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1847.87 79.40 68520.48 843.60 118925.56 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1855.78 79.74 68258.45 1134.47 118051.17 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1775.59 76.29 71366.62 1157.89 118756.62 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1775.16 76.28 71407.31 916.49 121199.38 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1782.00 76.57 71162.95 893.17 124061.13 00:23:14.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1816.86 78.07 69846.42 1049.05 115278.84 00:23:14.386 ======================================================== 00:23:14.386 Total : 18223.32 783.03 69758.21 843.60 137379.97 00:23:14.386 00:23:14.386 [2024-12-06 19:21:24.485142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2d900 is same with the state(6) to be set 00:23:14.386 [2024-12-06 19:21:24.485246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b6b0 is same with the state(6) to be set 00:23:14.386 [2024-12-06 19:21:24.485305] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c2c0 is same with the state(6) to be set 00:23:14.386 [2024-12-06 19:21:24.485362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dae0 is same with the state(6) to be set 00:23:14.386 [2024-12-06 19:21:24.485419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b9e0 is same with the state(6) to be set 00:23:14.386 [2024-12-06 19:21:24.485482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2d720 is same with the state(6) to be set 00:23:14.386 [2024-12-06 19:21:24.485538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2cc50 is same with the state(6) to be set 00:23:14.386 [2024-12-06 19:21:24.485595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c920 is same with the state(6) to be set 00:23:14.386 [2024-12-06 19:21:24.485696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2bd10 is same with the state(6) to be set 00:23:14.386 [2024-12-06 19:21:24.485758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c5f0 is same with the state(6) to be set 00:23:14.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:14.666 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1167881 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1167881 00:23:15.600 19:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1167881 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:15.600 19:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.600 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.600 rmmod nvme_tcp 00:23:15.600 rmmod nvme_fabrics 00:23:15.600 rmmod nvme_keyring 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1167786 ']' 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1167786 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1167786 ']' 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1167786 00:23:15.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1167786) - No such process 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1167786 is not 
found' 00:23:15.600 Process with pid 1167786 is not found 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.600 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.503 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.503 00:23:17.503 real 0m9.773s 00:23:17.503 user 0m23.871s 00:23:17.503 sys 0m5.538s 00:23:17.503 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:23:17.503 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:17.503 ************************************ 00:23:17.503 END TEST nvmf_shutdown_tc4 00:23:17.503 ************************************ 00:23:17.763 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:17.763 00:23:17.763 real 0m36.588s 00:23:17.763 user 1m37.065s 00:23:17.763 sys 0m11.838s 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.764 ************************************ 00:23:17.764 END TEST nvmf_shutdown 00:23:17.764 ************************************ 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:17.764 ************************************ 00:23:17.764 START TEST nvmf_nsid 00:23:17.764 ************************************ 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:17.764 * Looking for test storage... 
00:23:17.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.764 
19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:17.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.764 --rc genhtml_branch_coverage=1 00:23:17.764 --rc genhtml_function_coverage=1 00:23:17.764 --rc genhtml_legend=1 00:23:17.764 --rc geninfo_all_blocks=1 00:23:17.764 --rc 
geninfo_unexecuted_blocks=1 00:23:17.764 00:23:17.764 ' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:17.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.764 --rc genhtml_branch_coverage=1 00:23:17.764 --rc genhtml_function_coverage=1 00:23:17.764 --rc genhtml_legend=1 00:23:17.764 --rc geninfo_all_blocks=1 00:23:17.764 --rc geninfo_unexecuted_blocks=1 00:23:17.764 00:23:17.764 ' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:17.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.764 --rc genhtml_branch_coverage=1 00:23:17.764 --rc genhtml_function_coverage=1 00:23:17.764 --rc genhtml_legend=1 00:23:17.764 --rc geninfo_all_blocks=1 00:23:17.764 --rc geninfo_unexecuted_blocks=1 00:23:17.764 00:23:17.764 ' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:17.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.764 --rc genhtml_branch_coverage=1 00:23:17.764 --rc genhtml_function_coverage=1 00:23:17.764 --rc genhtml_legend=1 00:23:17.764 --rc geninfo_all_blocks=1 00:23:17.764 --rc geninfo_unexecuted_blocks=1 00:23:17.764 00:23:17.764 ' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.764 19:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.764 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:20.295 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:20.295 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.295 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:20.296 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:20.296 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.296 19:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.296 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:20.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:23:20.296 00:23:20.296 --- 10.0.0.2 ping statistics --- 00:23:20.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.296 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:23:20.296 00:23:20.296 --- 10.0.0.1 ping statistics --- 00:23:20.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.296 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.296 19:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1170628 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1170628 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1170628 ']' 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.296 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:20.296 [2024-12-06 19:21:30.752308] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:23:20.296 [2024-12-06 19:21:30.752401] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.296 [2024-12-06 19:21:30.825863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.554 [2024-12-06 19:21:30.881901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.554 [2024-12-06 19:21:30.881956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.554 [2024-12-06 19:21:30.881983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.554 [2024-12-06 19:21:30.881995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.554 [2024-12-06 19:21:30.882004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.554 [2024-12-06 19:21:30.882602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.554 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.554 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:20.555 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.555 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.555 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1170746 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.555 
19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3e21f15b-2f8f-46dd-adca-8c1fe95fd412 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=dc5733ef-994e-4ab2-b091-b8f7dc7d7e30 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=9b4c4b10-a0f2-4d4c-87e0-0e36bab56572 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:20.555 null0 00:23:20.555 null1 00:23:20.555 null2 00:23:20.555 [2024-12-06 19:21:31.063641] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.555 [2024-12-06 19:21:31.076977] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:23:20.555 [2024-12-06 19:21:31.077056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170746 ] 00:23:20.555 [2024-12-06 19:21:31.087897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1170746 /var/tmp/tgt2.sock 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1170746 ']' 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:20.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.555 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:20.813 [2024-12-06 19:21:31.144438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.813 [2024-12-06 19:21:31.201887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.071 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.071 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:21.071 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:21.329 [2024-12-06 19:21:31.902181] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.588 [2024-12-06 19:21:31.918324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:21.588 nvme0n1 nvme0n2 00:23:21.588 nvme1n1 00:23:21.588 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:21.588 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:21.588 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:22.154 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3e21f15b-2f8f-46dd-adca-8c1fe95fd412 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:23.088 19:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3e21f15b2f8f46ddadca8c1fe95fd412 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3E21F15B2F8F46DDADCA8C1FE95FD412 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3E21F15B2F8F46DDADCA8C1FE95FD412 == \3\E\2\1\F\1\5\B\2\F\8\F\4\6\D\D\A\D\C\A\8\C\1\F\E\9\5\F\D\4\1\2 ]] 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid dc5733ef-994e-4ab2-b091-b8f7dc7d7e30 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:23.088 
19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dc5733ef994e4ab2b091b8f7dc7d7e30 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DC5733EF994E4AB2B091B8F7DC7D7E30 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DC5733EF994E4AB2B091B8F7DC7D7E30 == \D\C\5\7\3\3\E\F\9\9\4\E\4\A\B\2\B\0\9\1\B\8\F\7\D\C\7\D\7\E\3\0 ]] 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 9b4c4b10-a0f2-4d4c-87e0-0e36bab56572 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:23:23.088 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9b4c4b10a0f24d4c87e00e36bab56572 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9B4C4B10A0F24D4C87E00E36BAB56572 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 9B4C4B10A0F24D4C87E00E36BAB56572 == \9\B\4\C\4\B\1\0\A\0\F\2\4\D\4\C\8\7\E\0\0\E\3\6\B\A\B\5\6\5\7\2 ]] 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1170746 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1170746 ']' 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1170746 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1170746 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1170746' 00:23:23.347 killing process with pid 1170746 00:23:23.347 19:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1170746 00:23:23.347 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1170746 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.912 rmmod nvme_tcp 00:23:23.912 rmmod nvme_fabrics 00:23:23.912 rmmod nvme_keyring 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1170628 ']' 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1170628 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1170628 ']' 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1170628 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.912 19:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1170628 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1170628' 00:23:23.912 killing process with pid 1170628 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1170628 00:23:23.912 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1170628 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.171 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.171 19:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.705 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.705 00:23:26.705 real 0m8.532s 00:23:26.705 user 0m8.361s 00:23:26.705 sys 0m2.712s 00:23:26.705 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.705 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:26.705 ************************************ 00:23:26.705 END TEST nvmf_nsid 00:23:26.705 ************************************ 00:23:26.705 19:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:26.705 00:23:26.705 real 11m41.875s 00:23:26.705 user 27m24.634s 00:23:26.705 sys 2m51.891s 00:23:26.705 19:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.705 19:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:26.705 ************************************ 00:23:26.705 END TEST nvmf_target_extra 00:23:26.705 ************************************ 00:23:26.705 19:21:36 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:26.705 19:21:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:26.705 19:21:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.705 19:21:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.705 ************************************ 00:23:26.705 START TEST nvmf_host 00:23:26.705 ************************************ 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:26.705 * Looking for test storage... 
00:23:26.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:26.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.705 --rc genhtml_branch_coverage=1 00:23:26.705 --rc genhtml_function_coverage=1 00:23:26.705 --rc genhtml_legend=1 00:23:26.705 --rc geninfo_all_blocks=1 00:23:26.705 --rc geninfo_unexecuted_blocks=1 00:23:26.705 00:23:26.705 ' 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:26.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.705 --rc genhtml_branch_coverage=1 00:23:26.705 --rc genhtml_function_coverage=1 00:23:26.705 --rc genhtml_legend=1 00:23:26.705 --rc 
geninfo_all_blocks=1 00:23:26.705 --rc geninfo_unexecuted_blocks=1 00:23:26.705 00:23:26.705 ' 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:26.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.705 --rc genhtml_branch_coverage=1 00:23:26.705 --rc genhtml_function_coverage=1 00:23:26.705 --rc genhtml_legend=1 00:23:26.705 --rc geninfo_all_blocks=1 00:23:26.705 --rc geninfo_unexecuted_blocks=1 00:23:26.705 00:23:26.705 ' 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:26.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.705 --rc genhtml_branch_coverage=1 00:23:26.705 --rc genhtml_function_coverage=1 00:23:26.705 --rc genhtml_legend=1 00:23:26.705 --rc geninfo_all_blocks=1 00:23:26.705 --rc geninfo_unexecuted_blocks=1 00:23:26.705 00:23:26.705 ' 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:26.705 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.706 ************************************ 00:23:26.706 START TEST nvmf_multicontroller 00:23:26.706 ************************************ 00:23:26.706 19:21:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:26.706 * Looking for test storage... 
00:23:26.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:26.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.706 --rc genhtml_branch_coverage=1 00:23:26.706 --rc genhtml_function_coverage=1 
00:23:26.706 --rc genhtml_legend=1 00:23:26.706 --rc geninfo_all_blocks=1 00:23:26.706 --rc geninfo_unexecuted_blocks=1 00:23:26.706 00:23:26.706 ' 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:26.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.706 --rc genhtml_branch_coverage=1 00:23:26.706 --rc genhtml_function_coverage=1 00:23:26.706 --rc genhtml_legend=1 00:23:26.706 --rc geninfo_all_blocks=1 00:23:26.706 --rc geninfo_unexecuted_blocks=1 00:23:26.706 00:23:26.706 ' 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:26.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.706 --rc genhtml_branch_coverage=1 00:23:26.706 --rc genhtml_function_coverage=1 00:23:26.706 --rc genhtml_legend=1 00:23:26.706 --rc geninfo_all_blocks=1 00:23:26.706 --rc geninfo_unexecuted_blocks=1 00:23:26.706 00:23:26.706 ' 00:23:26.706 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:26.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.707 --rc genhtml_branch_coverage=1 00:23:26.707 --rc genhtml_function_coverage=1 00:23:26.707 --rc genhtml_legend=1 00:23:26.707 --rc geninfo_all_blocks=1 00:23:26.707 --rc geninfo_unexecuted_blocks=1 00:23:26.707 00:23:26.707 ' 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.707 19:21:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.707 19:21:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.612 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.870 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:28.871 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:28.871 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.871 19:21:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:28.871 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:28.871 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:23:28.871 00:23:28.871 --- 10.0.0.2 ping statistics --- 00:23:28.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.871 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:23:28.871 00:23:28.871 --- 10.0.0.1 ping statistics --- 00:23:28.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.871 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1173194 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1173194 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1173194 ']' 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.871 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.872 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.130 [2024-12-06 19:21:39.492039] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:23:29.130 [2024-12-06 19:21:39.492121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.130 [2024-12-06 19:21:39.558131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:29.130 [2024-12-06 19:21:39.613121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.130 [2024-12-06 19:21:39.613180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:29.130 [2024-12-06 19:21:39.613208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.130 [2024-12-06 19:21:39.613219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.130 [2024-12-06 19:21:39.613228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.130 [2024-12-06 19:21:39.614755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.130 [2024-12-06 19:21:39.614818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.130 [2024-12-06 19:21:39.614822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 [2024-12-06 19:21:39.761067] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 Malloc0 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 [2024-12-06 
19:21:39.821507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 [2024-12-06 19:21:39.829343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 Malloc1 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1173297 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1173297 /var/tmp/bdevperf.sock 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1173297 ']' 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.389 19:21:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.648 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.648 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:29.648 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:29.648 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.648 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.907 NVMe0n1 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.907 1 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:29.907 19:21:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.907 request: 00:23:29.907 { 00:23:29.907 "name": "NVMe0", 00:23:29.907 "trtype": "tcp", 00:23:29.907 "traddr": "10.0.0.2", 00:23:29.907 "adrfam": "ipv4", 00:23:29.907 "trsvcid": "4420", 00:23:29.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.907 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:29.907 "hostaddr": "10.0.0.1", 00:23:29.907 "prchk_reftag": false, 00:23:29.907 "prchk_guard": false, 00:23:29.907 "hdgst": false, 00:23:29.907 "ddgst": false, 00:23:29.907 "allow_unrecognized_csi": false, 00:23:29.907 "method": "bdev_nvme_attach_controller", 00:23:29.907 "req_id": 1 00:23:29.907 } 00:23:29.907 Got JSON-RPC error response 00:23:29.907 response: 00:23:29.907 { 00:23:29.907 "code": -114, 00:23:29.907 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:29.907 } 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:29.907 19:21:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.907 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.907 request: 00:23:29.907 { 00:23:29.908 "name": "NVMe0", 00:23:29.908 "trtype": "tcp", 00:23:29.908 "traddr": "10.0.0.2", 00:23:29.908 "adrfam": "ipv4", 00:23:29.908 "trsvcid": "4420", 00:23:29.908 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.908 "hostaddr": "10.0.0.1", 00:23:29.908 "prchk_reftag": false, 00:23:29.908 "prchk_guard": false, 00:23:29.908 "hdgst": false, 00:23:29.908 "ddgst": false, 00:23:29.908 "allow_unrecognized_csi": false, 00:23:29.908 "method": "bdev_nvme_attach_controller", 00:23:29.908 "req_id": 1 00:23:29.908 } 00:23:29.908 Got JSON-RPC error response 00:23:29.908 response: 00:23:29.908 { 00:23:29.908 "code": -114, 00:23:29.908 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:29.908 } 00:23:29.908 19:21:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.908 request: 00:23:29.908 { 00:23:29.908 "name": "NVMe0", 00:23:29.908 "trtype": "tcp", 00:23:29.908 "traddr": "10.0.0.2", 00:23:29.908 "adrfam": "ipv4", 00:23:29.908 "trsvcid": "4420", 00:23:29.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.908 "hostaddr": "10.0.0.1", 00:23:29.908 "prchk_reftag": false, 00:23:29.908 "prchk_guard": false, 00:23:29.908 "hdgst": false, 00:23:29.908 "ddgst": false, 00:23:29.908 "multipath": "disable", 00:23:29.908 "allow_unrecognized_csi": false, 00:23:29.908 "method": "bdev_nvme_attach_controller", 00:23:29.908 "req_id": 1 00:23:29.908 } 00:23:29.908 Got JSON-RPC error response 00:23:29.908 response: 00:23:29.908 { 00:23:29.908 "code": -114, 00:23:29.908 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:29.908 } 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.908 request: 00:23:29.908 { 00:23:29.908 "name": "NVMe0", 00:23:29.908 "trtype": "tcp", 00:23:29.908 "traddr": "10.0.0.2", 00:23:29.908 "adrfam": "ipv4", 00:23:29.908 "trsvcid": "4420", 00:23:29.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.908 "hostaddr": "10.0.0.1", 00:23:29.908 "prchk_reftag": false, 00:23:29.908 "prchk_guard": false, 00:23:29.908 "hdgst": false, 00:23:29.908 "ddgst": false, 00:23:29.908 "multipath": "failover", 00:23:29.908 "allow_unrecognized_csi": false, 00:23:29.908 "method": "bdev_nvme_attach_controller", 00:23:29.908 "req_id": 1 00:23:29.908 } 00:23:29.908 Got JSON-RPC error response 00:23:29.908 response: 00:23:29.908 { 00:23:29.908 "code": -114, 00:23:29.908 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:29.908 } 00:23:29.908 19:21:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.908 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.167 NVMe0n1 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.167 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:30.167 19:21:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.541 { 00:23:31.541 "results": [ 00:23:31.541 { 00:23:31.541 "job": "NVMe0n1", 00:23:31.541 "core_mask": "0x1", 00:23:31.541 "workload": "write", 00:23:31.541 "status": "finished", 00:23:31.541 "queue_depth": 128, 00:23:31.541 "io_size": 4096, 00:23:31.541 "runtime": 1.007427, 00:23:31.541 "iops": 18235.564462735267, 00:23:31.541 "mibps": 71.23267368255964, 00:23:31.541 "io_failed": 0, 00:23:31.541 "io_timeout": 0, 00:23:31.541 "avg_latency_us": 7002.062936391293, 00:23:31.541 "min_latency_us": 4660.337777777778, 00:23:31.541 "max_latency_us": 12815.92888888889 00:23:31.541 } 00:23:31.541 ], 00:23:31.541 "core_count": 1 00:23:31.541 } 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1173297 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1173297 ']' 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1173297 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1173297 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1173297' 00:23:31.541 killing process with pid 1173297 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1173297 00:23:31.541 19:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1173297 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:31.541 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:31.541 [2024-12-06 19:21:39.938574] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:23:31.541 [2024-12-06 19:21:39.938699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173297 ] 00:23:31.541 [2024-12-06 19:21:40.008589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.541 [2024-12-06 19:21:40.071083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.541 [2024-12-06 19:21:40.589897] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 4c843485-d729-46a2-beba-fb38d6ca72b4 already exists 00:23:31.541 [2024-12-06 19:21:40.589937] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:4c843485-d729-46a2-beba-fb38d6ca72b4 alias for bdev NVMe1n1 00:23:31.541 [2024-12-06 19:21:40.589971] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:31.541 Running I/O for 1 seconds... 00:23:31.541 18162.00 IOPS, 70.95 MiB/s 00:23:31.541 Latency(us) 00:23:31.541 [2024-12-06T18:21:42.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.541 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:31.541 NVMe0n1 : 1.01 18235.56 71.23 0.00 0.00 7002.06 4660.34 12815.93 00:23:31.541 [2024-12-06T18:21:42.118Z] =================================================================================================================== 00:23:31.541 [2024-12-06T18:21:42.118Z] Total : 18235.56 71.23 0.00 0.00 7002.06 4660.34 12815.93 00:23:31.541 Received shutdown signal, test time was about 1.000000 seconds 00:23:31.541 00:23:31.541 Latency(us) 00:23:31.541 [2024-12-06T18:21:42.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.541 [2024-12-06T18:21:42.118Z] =================================================================================================================== 00:23:31.541 [2024-12-06T18:21:42.118Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:31.541 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.541 rmmod nvme_tcp 00:23:31.541 rmmod nvme_fabrics 00:23:31.541 rmmod nvme_keyring 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1173194 ']' 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1173194 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1173194 ']' 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1173194 
00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.541 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1173194 00:23:31.799 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:31.799 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.799 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1173194' 00:23:31.799 killing process with pid 1173194 00:23:31.799 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1173194 00:23:31.799 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1173194 00:23:32.058 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:32.058 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:32.058 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:32.059 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:32.059 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:32.059 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:32.059 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:32.059 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:32.059 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:32.059 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.059 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.059 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.963 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.963 00:23:33.963 real 0m7.476s 00:23:33.963 user 0m11.338s 00:23:33.963 sys 0m2.369s 00:23:33.963 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.963 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.963 ************************************ 00:23:33.963 END TEST nvmf_multicontroller 00:23:33.963 ************************************ 00:23:33.963 19:21:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:33.963 19:21:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.963 19:21:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.963 19:21:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.963 ************************************ 00:23:33.963 START TEST nvmf_aer 00:23:33.963 ************************************ 00:23:33.963 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:34.221 * Looking for test storage... 
00:23:34.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:34.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.221 --rc genhtml_branch_coverage=1 00:23:34.221 --rc genhtml_function_coverage=1 00:23:34.221 --rc genhtml_legend=1 00:23:34.221 --rc geninfo_all_blocks=1 00:23:34.221 --rc geninfo_unexecuted_blocks=1 00:23:34.221 00:23:34.221 ' 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:34.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.221 --rc 
genhtml_branch_coverage=1 00:23:34.221 --rc genhtml_function_coverage=1 00:23:34.221 --rc genhtml_legend=1 00:23:34.221 --rc geninfo_all_blocks=1 00:23:34.221 --rc geninfo_unexecuted_blocks=1 00:23:34.221 00:23:34.221 ' 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:34.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.221 --rc genhtml_branch_coverage=1 00:23:34.221 --rc genhtml_function_coverage=1 00:23:34.221 --rc genhtml_legend=1 00:23:34.221 --rc geninfo_all_blocks=1 00:23:34.221 --rc geninfo_unexecuted_blocks=1 00:23:34.221 00:23:34.221 ' 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:34.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.221 --rc genhtml_branch_coverage=1 00:23:34.221 --rc genhtml_function_coverage=1 00:23:34.221 --rc genhtml_legend=1 00:23:34.221 --rc geninfo_all_blocks=1 00:23:34.221 --rc geninfo_unexecuted_blocks=1 00:23:34.221 00:23:34.221 ' 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.221 19:21:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.221 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:34.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:34.222 19:21:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.748 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:36.749 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:36.749 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.749 19:21:46 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:36.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:36.749 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:36.749 19:21:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:36.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:36.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:23:36.749 00:23:36.749 --- 10.0.0.2 ping statistics --- 00:23:36.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.749 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:36.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:23:36.749 00:23:36.749 --- 10.0.0.1 ping statistics --- 00:23:36.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.749 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:36.749 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1175553 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1175553 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1175553 ']' 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.750 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.750 [2024-12-06 19:21:47.100957] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:23:36.750 [2024-12-06 19:21:47.101044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.750 [2024-12-06 19:21:47.172001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:36.750 [2024-12-06 19:21:47.227391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:36.750 [2024-12-06 19:21:47.227456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.750 [2024-12-06 19:21:47.227484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.750 [2024-12-06 19:21:47.227495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.750 [2024-12-06 19:21:47.227505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.750 [2024-12-06 19:21:47.229002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.750 [2024-12-06 19:21:47.229063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.750 [2024-12-06 19:21:47.229126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:36.750 [2024-12-06 19:21:47.229129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.008 [2024-12-06 19:21:47.370232] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.008 Malloc0 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.008 [2024-12-06 19:21:47.427514] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.008 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.008 [ 00:23:37.008 { 00:23:37.008 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:37.008 "subtype": "Discovery", 00:23:37.008 "listen_addresses": [], 00:23:37.008 "allow_any_host": true, 00:23:37.008 "hosts": [] 00:23:37.008 }, 00:23:37.008 { 00:23:37.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.008 "subtype": "NVMe", 00:23:37.008 "listen_addresses": [ 00:23:37.008 { 00:23:37.009 "trtype": "TCP", 00:23:37.009 "adrfam": "IPv4", 00:23:37.009 "traddr": "10.0.0.2", 00:23:37.009 "trsvcid": "4420" 00:23:37.009 } 00:23:37.009 ], 00:23:37.009 "allow_any_host": true, 00:23:37.009 "hosts": [], 00:23:37.009 "serial_number": "SPDK00000000000001", 00:23:37.009 "model_number": "SPDK bdev Controller", 00:23:37.009 "max_namespaces": 2, 00:23:37.009 "min_cntlid": 1, 00:23:37.009 "max_cntlid": 65519, 00:23:37.009 "namespaces": [ 00:23:37.009 { 00:23:37.009 "nsid": 1, 00:23:37.009 "bdev_name": "Malloc0", 00:23:37.009 "name": "Malloc0", 00:23:37.009 "nguid": "0760B19A24F145FB8253F23EEA27ACA6", 00:23:37.009 "uuid": "0760b19a-24f1-45fb-8253-f23eea27aca6" 00:23:37.009 } 00:23:37.009 ] 00:23:37.009 } 00:23:37.009 ] 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1175587 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:37.009 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.268 Malloc1 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.268 [ 00:23:37.268 { 00:23:37.268 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:37.268 "subtype": "Discovery", 00:23:37.268 "listen_addresses": [], 00:23:37.268 "allow_any_host": true, 00:23:37.268 "hosts": [] 00:23:37.268 }, 00:23:37.268 { 00:23:37.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.268 "subtype": "NVMe", 00:23:37.268 "listen_addresses": [ 00:23:37.268 { 00:23:37.268 "trtype": "TCP", 00:23:37.268 "adrfam": "IPv4", 00:23:37.268 "traddr": "10.0.0.2", 00:23:37.268 "trsvcid": "4420" 00:23:37.268 } 00:23:37.268 ], 00:23:37.268 "allow_any_host": true, 00:23:37.268 "hosts": [], 00:23:37.268 "serial_number": "SPDK00000000000001", 00:23:37.268 "model_number": 
"SPDK bdev Controller", 00:23:37.268 "max_namespaces": 2, 00:23:37.268 "min_cntlid": 1, 00:23:37.268 "max_cntlid": 65519, 00:23:37.268 "namespaces": [ 00:23:37.268 { 00:23:37.268 "nsid": 1, 00:23:37.268 "bdev_name": "Malloc0", 00:23:37.268 "name": "Malloc0", 00:23:37.268 "nguid": "0760B19A24F145FB8253F23EEA27ACA6", 00:23:37.268 "uuid": "0760b19a-24f1-45fb-8253-f23eea27aca6" 00:23:37.268 }, 00:23:37.268 { 00:23:37.268 "nsid": 2, 00:23:37.268 "bdev_name": "Malloc1", 00:23:37.268 "name": "Malloc1", 00:23:37.268 "nguid": "8131AB6D2E124AE68B69FD4A076C9ACA", 00:23:37.268 "uuid": "8131ab6d-2e12-4ae6-8b69-fd4a076c9aca" 00:23:37.268 } 00:23:37.268 ] 00:23:37.268 } 00:23:37.268 ] 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1175587 00:23:37.268 Asynchronous Event Request test 00:23:37.268 Attaching to 10.0.0.2 00:23:37.268 Attached to 10.0.0.2 00:23:37.268 Registering asynchronous event callbacks... 00:23:37.268 Starting namespace attribute notice tests for all controllers... 00:23:37.268 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:37.268 aer_cb - Changed Namespace 00:23:37.268 Cleaning up... 
00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:37.268 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:37.269 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:37.269 rmmod nvme_tcp 
00:23:37.269 rmmod nvme_fabrics 00:23:37.269 rmmod nvme_keyring 00:23:37.269 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:37.269 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:37.269 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:37.269 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1175553 ']' 00:23:37.269 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1175553 00:23:37.269 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1175553 ']' 00:23:37.269 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1175553 00:23:37.269 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:37.528 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.528 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1175553 00:23:37.528 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.528 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.528 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1175553' 00:23:37.528 killing process with pid 1175553 00:23:37.528 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1175553 00:23:37.528 19:21:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1175553 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:37.528 19:21:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.528 19:21:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:40.067 00:23:40.067 real 0m5.652s 00:23:40.067 user 0m4.263s 00:23:40.067 sys 0m2.091s 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.067 ************************************ 00:23:40.067 END TEST nvmf_aer 00:23:40.067 ************************************ 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.067 ************************************ 00:23:40.067 START TEST nvmf_async_init 
00:23:40.067 ************************************ 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:40.067 * Looking for test storage... 00:23:40.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:40.067 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:40.067 --rc genhtml_branch_coverage=1 00:23:40.067 --rc genhtml_function_coverage=1 00:23:40.067 --rc genhtml_legend=1 00:23:40.067 --rc geninfo_all_blocks=1 00:23:40.067 --rc geninfo_unexecuted_blocks=1 00:23:40.067 00:23:40.067 ' 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:40.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.067 --rc genhtml_branch_coverage=1 00:23:40.067 --rc genhtml_function_coverage=1 00:23:40.067 --rc genhtml_legend=1 00:23:40.067 --rc geninfo_all_blocks=1 00:23:40.067 --rc geninfo_unexecuted_blocks=1 00:23:40.067 00:23:40.067 ' 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:40.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.067 --rc genhtml_branch_coverage=1 00:23:40.067 --rc genhtml_function_coverage=1 00:23:40.067 --rc genhtml_legend=1 00:23:40.067 --rc geninfo_all_blocks=1 00:23:40.067 --rc geninfo_unexecuted_blocks=1 00:23:40.067 00:23:40.067 ' 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:40.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.067 --rc genhtml_branch_coverage=1 00:23:40.067 --rc genhtml_function_coverage=1 00:23:40.067 --rc genhtml_legend=1 00:23:40.067 --rc geninfo_all_blocks=1 00:23:40.067 --rc geninfo_unexecuted_blocks=1 00:23:40.067 00:23:40.067 ' 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.067 19:21:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.067 
19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.067 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dcf3b54617914d81b554f1e26fc033f9 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:40.068 19:21:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.972 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.972 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.972 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.972 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.972 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.972 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.972 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.232 19:21:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:42.232 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:42.232 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:42.232 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:42.232 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.232 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:42.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:42.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:23:42.233 00:23:42.233 --- 10.0.0.2 ping statistics --- 00:23:42.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.233 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:23:42.233 00:23:42.233 --- 10.0.0.1 ping statistics --- 00:23:42.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.233 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1177651 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1177651 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1177651 ']' 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.233 19:21:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.233 [2024-12-06 19:21:52.781699] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:23:42.233 [2024-12-06 19:21:52.781788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.491 [2024-12-06 19:21:52.852658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.491 [2024-12-06 19:21:52.908878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.491 [2024-12-06 19:21:52.908933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.491 [2024-12-06 19:21:52.908962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.491 [2024-12-06 19:21:52.908973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.491 [2024-12-06 19:21:52.908983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:42.491 [2024-12-06 19:21:52.909582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.491 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.492 [2024-12-06 19:21:53.056337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.492 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.750 null0 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dcf3b54617914d81b554f1e26fc033f9 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.750 [2024-12-06 19:21:53.096628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.750 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.009 nvme0n1 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.009 [ 00:23:43.009 { 00:23:43.009 "name": "nvme0n1", 00:23:43.009 "aliases": [ 00:23:43.009 "dcf3b546-1791-4d81-b554-f1e26fc033f9" 00:23:43.009 ], 00:23:43.009 "product_name": "NVMe disk", 00:23:43.009 "block_size": 512, 00:23:43.009 "num_blocks": 2097152, 00:23:43.009 "uuid": "dcf3b546-1791-4d81-b554-f1e26fc033f9", 00:23:43.009 "numa_id": 0, 00:23:43.009 "assigned_rate_limits": { 00:23:43.009 "rw_ios_per_sec": 0, 00:23:43.009 "rw_mbytes_per_sec": 0, 00:23:43.009 "r_mbytes_per_sec": 0, 00:23:43.009 "w_mbytes_per_sec": 0 00:23:43.009 }, 00:23:43.009 "claimed": false, 00:23:43.009 "zoned": false, 00:23:43.009 "supported_io_types": { 00:23:43.009 "read": true, 00:23:43.009 "write": true, 00:23:43.009 "unmap": false, 00:23:43.009 "flush": true, 00:23:43.009 "reset": true, 00:23:43.009 "nvme_admin": true, 00:23:43.009 "nvme_io": true, 00:23:43.009 "nvme_io_md": false, 00:23:43.009 "write_zeroes": true, 00:23:43.009 "zcopy": false, 00:23:43.009 "get_zone_info": false, 00:23:43.009 "zone_management": false, 00:23:43.009 "zone_append": false, 00:23:43.009 "compare": true, 00:23:43.009 "compare_and_write": true, 00:23:43.009 "abort": true, 00:23:43.009 "seek_hole": false, 00:23:43.009 "seek_data": false, 00:23:43.009 "copy": true, 00:23:43.009 
"nvme_iov_md": false 00:23:43.009 }, 00:23:43.009 "memory_domains": [ 00:23:43.009 { 00:23:43.009 "dma_device_id": "system", 00:23:43.009 "dma_device_type": 1 00:23:43.009 } 00:23:43.009 ], 00:23:43.009 "driver_specific": { 00:23:43.009 "nvme": [ 00:23:43.009 { 00:23:43.009 "trid": { 00:23:43.009 "trtype": "TCP", 00:23:43.009 "adrfam": "IPv4", 00:23:43.009 "traddr": "10.0.0.2", 00:23:43.009 "trsvcid": "4420", 00:23:43.009 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:43.009 }, 00:23:43.009 "ctrlr_data": { 00:23:43.009 "cntlid": 1, 00:23:43.009 "vendor_id": "0x8086", 00:23:43.009 "model_number": "SPDK bdev Controller", 00:23:43.009 "serial_number": "00000000000000000000", 00:23:43.009 "firmware_revision": "25.01", 00:23:43.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.009 "oacs": { 00:23:43.009 "security": 0, 00:23:43.009 "format": 0, 00:23:43.009 "firmware": 0, 00:23:43.009 "ns_manage": 0 00:23:43.009 }, 00:23:43.009 "multi_ctrlr": true, 00:23:43.009 "ana_reporting": false 00:23:43.009 }, 00:23:43.009 "vs": { 00:23:43.009 "nvme_version": "1.3" 00:23:43.009 }, 00:23:43.009 "ns_data": { 00:23:43.009 "id": 1, 00:23:43.009 "can_share": true 00:23:43.009 } 00:23:43.009 } 00:23:43.009 ], 00:23:43.009 "mp_policy": "active_passive" 00:23:43.009 } 00:23:43.009 } 00:23:43.009 ] 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.009 [2024-12-06 19:21:53.346225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.009 [2024-12-06 19:21:53.346325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x14c8740 (9): Bad file descriptor 00:23:43.009 [2024-12-06 19:21:53.478806] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.009 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.009 [ 00:23:43.009 { 00:23:43.009 "name": "nvme0n1", 00:23:43.009 "aliases": [ 00:23:43.009 "dcf3b546-1791-4d81-b554-f1e26fc033f9" 00:23:43.009 ], 00:23:43.009 "product_name": "NVMe disk", 00:23:43.009 "block_size": 512, 00:23:43.009 "num_blocks": 2097152, 00:23:43.009 "uuid": "dcf3b546-1791-4d81-b554-f1e26fc033f9", 00:23:43.009 "numa_id": 0, 00:23:43.009 "assigned_rate_limits": { 00:23:43.009 "rw_ios_per_sec": 0, 00:23:43.009 "rw_mbytes_per_sec": 0, 00:23:43.009 "r_mbytes_per_sec": 0, 00:23:43.009 "w_mbytes_per_sec": 0 00:23:43.009 }, 00:23:43.009 "claimed": false, 00:23:43.009 "zoned": false, 00:23:43.010 "supported_io_types": { 00:23:43.010 "read": true, 00:23:43.010 "write": true, 00:23:43.010 "unmap": false, 00:23:43.010 "flush": true, 00:23:43.010 "reset": true, 00:23:43.010 "nvme_admin": true, 00:23:43.010 "nvme_io": true, 00:23:43.010 "nvme_io_md": false, 00:23:43.010 "write_zeroes": true, 00:23:43.010 "zcopy": false, 00:23:43.010 "get_zone_info": false, 00:23:43.010 "zone_management": false, 00:23:43.010 "zone_append": false, 00:23:43.010 "compare": true, 00:23:43.010 "compare_and_write": true, 00:23:43.010 "abort": true, 00:23:43.010 "seek_hole": false, 00:23:43.010 "seek_data": false, 00:23:43.010 "copy": true, 00:23:43.010 "nvme_iov_md": false 00:23:43.010 }, 00:23:43.010 "memory_domains": [ 
00:23:43.010 { 00:23:43.010 "dma_device_id": "system", 00:23:43.010 "dma_device_type": 1 00:23:43.010 } 00:23:43.010 ], 00:23:43.010 "driver_specific": { 00:23:43.010 "nvme": [ 00:23:43.010 { 00:23:43.010 "trid": { 00:23:43.010 "trtype": "TCP", 00:23:43.010 "adrfam": "IPv4", 00:23:43.010 "traddr": "10.0.0.2", 00:23:43.010 "trsvcid": "4420", 00:23:43.010 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:43.010 }, 00:23:43.010 "ctrlr_data": { 00:23:43.010 "cntlid": 2, 00:23:43.010 "vendor_id": "0x8086", 00:23:43.010 "model_number": "SPDK bdev Controller", 00:23:43.010 "serial_number": "00000000000000000000", 00:23:43.010 "firmware_revision": "25.01", 00:23:43.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.010 "oacs": { 00:23:43.010 "security": 0, 00:23:43.010 "format": 0, 00:23:43.010 "firmware": 0, 00:23:43.010 "ns_manage": 0 00:23:43.010 }, 00:23:43.010 "multi_ctrlr": true, 00:23:43.010 "ana_reporting": false 00:23:43.010 }, 00:23:43.010 "vs": { 00:23:43.010 "nvme_version": "1.3" 00:23:43.010 }, 00:23:43.010 "ns_data": { 00:23:43.010 "id": 1, 00:23:43.010 "can_share": true 00:23:43.010 } 00:23:43.010 } 00:23:43.010 ], 00:23:43.010 "mp_policy": "active_passive" 00:23:43.010 } 00:23:43.010 } 00:23:43.010 ] 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.G4aZoQ1qag 
00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.G4aZoQ1qag 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.G4aZoQ1qag 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.010 [2024-12-06 19:21:53.534852] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.010 [2024-12-06 19:21:53.535005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.010 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.010 [2024-12-06 19:21:53.550885] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.268 nvme0n1 00:23:43.268 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.268 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:43.268 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.268 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.268 [ 00:23:43.268 { 00:23:43.268 "name": "nvme0n1", 00:23:43.268 "aliases": [ 00:23:43.268 "dcf3b546-1791-4d81-b554-f1e26fc033f9" 00:23:43.268 ], 00:23:43.269 "product_name": "NVMe disk", 00:23:43.269 "block_size": 512, 00:23:43.269 "num_blocks": 2097152, 00:23:43.269 "uuid": "dcf3b546-1791-4d81-b554-f1e26fc033f9", 00:23:43.269 "numa_id": 0, 00:23:43.269 "assigned_rate_limits": { 00:23:43.269 "rw_ios_per_sec": 0, 00:23:43.269 
"rw_mbytes_per_sec": 0, 00:23:43.269 "r_mbytes_per_sec": 0, 00:23:43.269 "w_mbytes_per_sec": 0 00:23:43.269 }, 00:23:43.269 "claimed": false, 00:23:43.269 "zoned": false, 00:23:43.269 "supported_io_types": { 00:23:43.269 "read": true, 00:23:43.269 "write": true, 00:23:43.269 "unmap": false, 00:23:43.269 "flush": true, 00:23:43.269 "reset": true, 00:23:43.269 "nvme_admin": true, 00:23:43.269 "nvme_io": true, 00:23:43.269 "nvme_io_md": false, 00:23:43.269 "write_zeroes": true, 00:23:43.269 "zcopy": false, 00:23:43.269 "get_zone_info": false, 00:23:43.269 "zone_management": false, 00:23:43.269 "zone_append": false, 00:23:43.269 "compare": true, 00:23:43.269 "compare_and_write": true, 00:23:43.269 "abort": true, 00:23:43.269 "seek_hole": false, 00:23:43.269 "seek_data": false, 00:23:43.269 "copy": true, 00:23:43.269 "nvme_iov_md": false 00:23:43.269 }, 00:23:43.269 "memory_domains": [ 00:23:43.269 { 00:23:43.269 "dma_device_id": "system", 00:23:43.269 "dma_device_type": 1 00:23:43.269 } 00:23:43.269 ], 00:23:43.269 "driver_specific": { 00:23:43.269 "nvme": [ 00:23:43.269 { 00:23:43.269 "trid": { 00:23:43.269 "trtype": "TCP", 00:23:43.269 "adrfam": "IPv4", 00:23:43.269 "traddr": "10.0.0.2", 00:23:43.269 "trsvcid": "4421", 00:23:43.269 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:43.269 }, 00:23:43.269 "ctrlr_data": { 00:23:43.269 "cntlid": 3, 00:23:43.269 "vendor_id": "0x8086", 00:23:43.269 "model_number": "SPDK bdev Controller", 00:23:43.269 "serial_number": "00000000000000000000", 00:23:43.269 "firmware_revision": "25.01", 00:23:43.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.269 "oacs": { 00:23:43.269 "security": 0, 00:23:43.269 "format": 0, 00:23:43.269 "firmware": 0, 00:23:43.269 "ns_manage": 0 00:23:43.269 }, 00:23:43.269 "multi_ctrlr": true, 00:23:43.269 "ana_reporting": false 00:23:43.269 }, 00:23:43.269 "vs": { 00:23:43.269 "nvme_version": "1.3" 00:23:43.269 }, 00:23:43.269 "ns_data": { 00:23:43.269 "id": 1, 00:23:43.269 "can_share": true 00:23:43.269 } 
00:23:43.269 } 00:23:43.269 ], 00:23:43.269 "mp_policy": "active_passive" 00:23:43.269 } 00:23:43.269 } 00:23:43.269 ] 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.G4aZoQ1qag 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.269 rmmod nvme_tcp 00:23:43.269 rmmod nvme_fabrics 00:23:43.269 rmmod nvme_keyring 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:43.269 19:21:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1177651 ']' 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1177651 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1177651 ']' 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1177651 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1177651 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1177651' 00:23:43.269 killing process with pid 1177651 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1177651 00:23:43.269 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1177651 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.528 
19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.528 19:21:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.433 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.433 00:23:45.433 real 0m5.799s 00:23:45.433 user 0m2.219s 00:23:45.433 sys 0m1.986s 00:23:45.433 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.433 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.433 ************************************ 00:23:45.433 END TEST nvmf_async_init 00:23:45.433 ************************************ 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.693 ************************************ 00:23:45.693 START TEST dma 00:23:45.693 ************************************ 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:45.693 * Looking for test storage... 00:23:45.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.693 --rc genhtml_branch_coverage=1 00:23:45.693 --rc genhtml_function_coverage=1 00:23:45.693 --rc genhtml_legend=1 00:23:45.693 --rc geninfo_all_blocks=1 00:23:45.693 --rc geninfo_unexecuted_blocks=1 00:23:45.693 00:23:45.693 ' 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.693 --rc genhtml_branch_coverage=1 00:23:45.693 --rc genhtml_function_coverage=1 
00:23:45.693 --rc genhtml_legend=1 00:23:45.693 --rc geninfo_all_blocks=1 00:23:45.693 --rc geninfo_unexecuted_blocks=1 00:23:45.693 00:23:45.693 ' 00:23:45.693 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.693 --rc genhtml_branch_coverage=1 00:23:45.693 --rc genhtml_function_coverage=1 00:23:45.693 --rc genhtml_legend=1 00:23:45.693 --rc geninfo_all_blocks=1 00:23:45.693 --rc geninfo_unexecuted_blocks=1 00:23:45.694 00:23:45.694 ' 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:45.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.694 --rc genhtml_branch_coverage=1 00:23:45.694 --rc genhtml_function_coverage=1 00:23:45.694 --rc genhtml_legend=1 00:23:45.694 --rc geninfo_all_blocks=1 00:23:45.694 --rc geninfo_unexecuted_blocks=1 00:23:45.694 00:23:45.694 ' 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:45.694 
19:21:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:45.694 00:23:45.694 real 0m0.169s 00:23:45.694 user 0m0.112s 00:23:45.694 sys 0m0.066s 00:23:45.694 19:21:56 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:45.694 ************************************ 00:23:45.694 END TEST dma 00:23:45.694 ************************************ 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.694 ************************************ 00:23:45.694 START TEST nvmf_identify 00:23:45.694 ************************************ 00:23:45.694 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:45.952 * Looking for test storage... 
00:23:45.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.952 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:45.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.953 --rc genhtml_branch_coverage=1 00:23:45.953 --rc genhtml_function_coverage=1 00:23:45.953 --rc genhtml_legend=1 00:23:45.953 --rc geninfo_all_blocks=1 00:23:45.953 --rc geninfo_unexecuted_blocks=1 00:23:45.953 00:23:45.953 ' 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:23:45.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.953 --rc genhtml_branch_coverage=1 00:23:45.953 --rc genhtml_function_coverage=1 00:23:45.953 --rc genhtml_legend=1 00:23:45.953 --rc geninfo_all_blocks=1 00:23:45.953 --rc geninfo_unexecuted_blocks=1 00:23:45.953 00:23:45.953 ' 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:45.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.953 --rc genhtml_branch_coverage=1 00:23:45.953 --rc genhtml_function_coverage=1 00:23:45.953 --rc genhtml_legend=1 00:23:45.953 --rc geninfo_all_blocks=1 00:23:45.953 --rc geninfo_unexecuted_blocks=1 00:23:45.953 00:23:45.953 ' 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:45.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.953 --rc genhtml_branch_coverage=1 00:23:45.953 --rc genhtml_function_coverage=1 00:23:45.953 --rc genhtml_legend=1 00:23:45.953 --rc geninfo_all_blocks=1 00:23:45.953 --rc geninfo_unexecuted_blocks=1 00:23:45.953 00:23:45.953 ' 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.953 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:45.954 19:21:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:48.485 19:21:58 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:48.485 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.485 
19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:48.485 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.485 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:48.486 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:48.486 19:21:58 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:48.486 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:48.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:23:48.486 00:23:48.486 --- 10.0.0.2 ping statistics --- 00:23:48.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.486 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:48.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:23:48.486 00:23:48.486 --- 10.0.0.1 ping statistics --- 00:23:48.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.486 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1179794 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1179794 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1179794 ']' 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.486 19:21:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.486 [2024-12-06 19:21:58.786863] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:23:48.486 [2024-12-06 19:21:58.786963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.486 [2024-12-06 19:21:58.861790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.486 [2024-12-06 19:21:58.923995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.486 [2024-12-06 19:21:58.924058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.486 [2024-12-06 19:21:58.924087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.486 [2024-12-06 19:21:58.924098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.486 [2024-12-06 19:21:58.924108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:48.486 [2024-12-06 19:21:58.925832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.486 [2024-12-06 19:21:58.925871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.486 [2024-12-06 19:21:58.925971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.486 [2024-12-06 19:21:58.925967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.486 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.486 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:48.486 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:48.486 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.486 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.486 [2024-12-06 19:21:59.057149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.746 Malloc0 00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.746 19:21:59 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:48.746 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:48.747 [2024-12-06 19:21:59.155154] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:48.747 19:21:59
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:48.747 [
00:23:48.747 {
00:23:48.747 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:48.747 "subtype": "Discovery",
00:23:48.747 "listen_addresses": [
00:23:48.747 {
00:23:48.747 "trtype": "TCP",
00:23:48.747 "adrfam": "IPv4",
00:23:48.747 "traddr": "10.0.0.2",
00:23:48.747 "trsvcid": "4420"
00:23:48.747 }
00:23:48.747 ],
00:23:48.747 "allow_any_host": true,
00:23:48.747 "hosts": []
00:23:48.747 },
00:23:48.747 {
00:23:48.747 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:48.747 "subtype": "NVMe",
00:23:48.747 "listen_addresses": [
00:23:48.747 {
00:23:48.747 "trtype": "TCP",
00:23:48.747 "adrfam": "IPv4",
00:23:48.747 "traddr": "10.0.0.2",
00:23:48.747 "trsvcid": "4420"
00:23:48.747 }
00:23:48.747 ],
00:23:48.747 "allow_any_host": true,
00:23:48.747 "hosts": [],
00:23:48.747 "serial_number": "SPDK00000000000001",
00:23:48.747 "model_number": "SPDK bdev Controller",
00:23:48.747 "max_namespaces": 32,
00:23:48.747 "min_cntlid": 1,
00:23:48.747 "max_cntlid": 65519,
00:23:48.747 "namespaces": [
00:23:48.747 {
00:23:48.747 "nsid": 1,
00:23:48.747 "bdev_name": "Malloc0",
00:23:48.747 "name": "Malloc0",
00:23:48.747 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:23:48.747 "eui64": "ABCDEF0123456789",
00:23:48.747 "uuid": "b6f943d7-fed0-4152-a403-ecc85746acc1"
00:23:48.747 }
00:23:48.747 ]
00:23:48.747 }
00:23:48.747 ]
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.747 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:48.747 [2024-12-06 19:21:59.198451] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:23:48.747 [2024-12-06 19:21:59.198502] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179819 ] 00:23:48.747 [2024-12-06 19:21:59.249061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:48.747 [2024-12-06 19:21:59.249127] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:48.747 [2024-12-06 19:21:59.249137] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:48.747 [2024-12-06 19:21:59.249158] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:48.747 [2024-12-06 19:21:59.249172] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:48.747 [2024-12-06 19:21:59.253219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:48.747 [2024-12-06 19:21:59.253293] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x192e690 0 00:23:48.747 [2024-12-06 19:21:59.253449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:48.747 [2024-12-06 19:21:59.253469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:48.747 [2024-12-06 19:21:59.253478] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:48.747 [2024-12-06 19:21:59.253484] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:48.747 [2024-12-06 19:21:59.253533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.253547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.253555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 00:23:48.747 [2024-12-06 19:21:59.253575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:48.747 [2024-12-06 19:21:59.253602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.747 [2024-12-06 19:21:59.260679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.747 [2024-12-06 19:21:59.260698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.747 [2024-12-06 19:21:59.260706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.260713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:48.747 [2024-12-06 19:21:59.260735] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:48.747 [2024-12-06 19:21:59.260748] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:48.747 [2024-12-06 19:21:59.260759] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:48.747 [2024-12-06 19:21:59.260786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.260795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.260802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 
00:23:48.747 [2024-12-06 19:21:59.260814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.747 [2024-12-06 19:21:59.260838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.747 [2024-12-06 19:21:59.260976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.747 [2024-12-06 19:21:59.260988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.747 [2024-12-06 19:21:59.260995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.261002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:48.747 [2024-12-06 19:21:59.261017] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:48.747 [2024-12-06 19:21:59.261037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:48.747 [2024-12-06 19:21:59.261051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.261059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.261065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 00:23:48.747 [2024-12-06 19:21:59.261075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.747 [2024-12-06 19:21:59.261097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.747 [2024-12-06 19:21:59.261186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.747 [2024-12-06 19:21:59.261200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:48.747 [2024-12-06 19:21:59.261208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.261215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:48.747 [2024-12-06 19:21:59.261224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:48.747 [2024-12-06 19:21:59.261239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:48.747 [2024-12-06 19:21:59.261251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.261259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.261265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 00:23:48.747 [2024-12-06 19:21:59.261276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.747 [2024-12-06 19:21:59.261297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.747 [2024-12-06 19:21:59.261376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.747 [2024-12-06 19:21:59.261388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.747 [2024-12-06 19:21:59.261395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.261402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:48.747 [2024-12-06 19:21:59.261411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:48.747 [2024-12-06 19:21:59.261428] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.261437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.747 [2024-12-06 19:21:59.261443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 00:23:48.747 [2024-12-06 19:21:59.261453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.747 [2024-12-06 19:21:59.261474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.748 [2024-12-06 19:21:59.261559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.748 [2024-12-06 19:21:59.261571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.748 [2024-12-06 19:21:59.261578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.261584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:48.748 [2024-12-06 19:21:59.261593] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:48.748 [2024-12-06 19:21:59.261602] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:48.748 [2024-12-06 19:21:59.261620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:48.748 [2024-12-06 19:21:59.261731] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:48.748 [2024-12-06 19:21:59.261742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:48.748 [2024-12-06 19:21:59.261759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.261767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.261773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 00:23:48.748 [2024-12-06 19:21:59.261783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.748 [2024-12-06 19:21:59.261806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.748 [2024-12-06 19:21:59.261924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.748 [2024-12-06 19:21:59.261936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.748 [2024-12-06 19:21:59.261944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.261950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:48.748 [2024-12-06 19:21:59.261959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:48.748 [2024-12-06 19:21:59.261975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.261984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.261990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 00:23:48.748 [2024-12-06 19:21:59.262001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.748 [2024-12-06 19:21:59.262022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.748 [2024-12-06 
19:21:59.262105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.748 [2024-12-06 19:21:59.262119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.748 [2024-12-06 19:21:59.262126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.262133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:48.748 [2024-12-06 19:21:59.262140] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:48.748 [2024-12-06 19:21:59.262148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:48.748 [2024-12-06 19:21:59.262162] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:48.748 [2024-12-06 19:21:59.262177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:48.748 [2024-12-06 19:21:59.262195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.262203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 00:23:48.748 [2024-12-06 19:21:59.262214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.748 [2024-12-06 19:21:59.262235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.748 [2024-12-06 19:21:59.262386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.748 [2024-12-06 19:21:59.262405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:23:48.748 [2024-12-06 19:21:59.262414] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.262421] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192e690): datao=0, datal=4096, cccid=0 00:23:48.748 [2024-12-06 19:21:59.262429] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1990100) on tqpair(0x192e690): expected_datao=0, payload_size=4096 00:23:48.748 [2024-12-06 19:21:59.262436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.262455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.262466] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.305677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.748 [2024-12-06 19:21:59.305696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.748 [2024-12-06 19:21:59.305703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.305710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:48.748 [2024-12-06 19:21:59.305724] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:48.748 [2024-12-06 19:21:59.305732] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:48.748 [2024-12-06 19:21:59.305740] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:48.748 [2024-12-06 19:21:59.305749] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:48.748 [2024-12-06 19:21:59.305757] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:48.748 [2024-12-06 19:21:59.305764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:48.748 [2024-12-06 19:21:59.305779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:48.748 [2024-12-06 19:21:59.305792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.305800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.305806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 00:23:48.748 [2024-12-06 19:21:59.305817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:48.748 [2024-12-06 19:21:59.305840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.748 [2024-12-06 19:21:59.305968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.748 [2024-12-06 19:21:59.305981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.748 [2024-12-06 19:21:59.305988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.305995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:48.748 [2024-12-06 19:21:59.306009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192e690) 00:23:48.748 [2024-12-06 19:21:59.306032] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.748 [2024-12-06 19:21:59.306042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x192e690) 00:23:48.748 [2024-12-06 19:21:59.306069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.748 [2024-12-06 19:21:59.306079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x192e690) 00:23:48.748 [2024-12-06 19:21:59.306101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.748 [2024-12-06 19:21:59.306110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:48.748 [2024-12-06 19:21:59.306132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.748 [2024-12-06 19:21:59.306141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:48.748 [2024-12-06 19:21:59.306177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:48.748 [2024-12-06 19:21:59.306190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192e690) 00:23:48.748 [2024-12-06 19:21:59.306208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.748 [2024-12-06 19:21:59.306245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990100, cid 0, qid 0 00:23:48.748 [2024-12-06 19:21:59.306256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990280, cid 1, qid 0 00:23:48.748 [2024-12-06 19:21:59.306263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990400, cid 2, qid 0 00:23:48.748 [2024-12-06 19:21:59.306270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:48.748 [2024-12-06 19:21:59.306292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990700, cid 4, qid 0 00:23:48.748 [2024-12-06 19:21:59.306458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.748 [2024-12-06 19:21:59.306470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.748 [2024-12-06 19:21:59.306477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.748 [2024-12-06 19:21:59.306483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990700) on tqpair=0x192e690 00:23:48.749 [2024-12-06 19:21:59.306493] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:48.749 [2024-12-06 19:21:59.306502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:23:48.749 [2024-12-06 19:21:59.306520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.306529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192e690) 00:23:48.749 [2024-12-06 19:21:59.306540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.749 [2024-12-06 19:21:59.306562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990700, cid 4, qid 0 00:23:48.749 [2024-12-06 19:21:59.306662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.749 [2024-12-06 19:21:59.306687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.749 [2024-12-06 19:21:59.306695] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.306701] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192e690): datao=0, datal=4096, cccid=4 00:23:48.749 [2024-12-06 19:21:59.306713] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1990700) on tqpair(0x192e690): expected_datao=0, payload_size=4096 00:23:48.749 [2024-12-06 19:21:59.306721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.306731] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.306738] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.306750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.749 [2024-12-06 19:21:59.306760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.749 [2024-12-06 19:21:59.306767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.306773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1990700) on tqpair=0x192e690 00:23:48.749 [2024-12-06 19:21:59.306794] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:48.749 [2024-12-06 19:21:59.306836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.306847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192e690) 00:23:48.749 [2024-12-06 19:21:59.306858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.749 [2024-12-06 19:21:59.306870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.306877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.306883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x192e690) 00:23:48.749 [2024-12-06 19:21:59.306892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.749 [2024-12-06 19:21:59.306920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990700, cid 4, qid 0 00:23:48.749 [2024-12-06 19:21:59.306931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990880, cid 5, qid 0 00:23:48.749 [2024-12-06 19:21:59.307108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.749 [2024-12-06 19:21:59.307120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.749 [2024-12-06 19:21:59.307127] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.307134] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192e690): datao=0, datal=1024, cccid=4 00:23:48.749 [2024-12-06 19:21:59.307141] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1990700) on tqpair(0x192e690): expected_datao=0, payload_size=1024 00:23:48.749 [2024-12-06 19:21:59.307148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.307158] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.307165] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.307174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.749 [2024-12-06 19:21:59.307183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.749 [2024-12-06 19:21:59.307204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.749 [2024-12-06 19:21:59.307211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990880) on tqpair=0x192e690 00:23:49.011 [2024-12-06 19:21:59.350679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.011 [2024-12-06 19:21:59.350701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.011 [2024-12-06 19:21:59.350709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.350716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990700) on tqpair=0x192e690 00:23:49.011 [2024-12-06 19:21:59.350736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.350746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192e690) 00:23:49.011 [2024-12-06 19:21:59.350763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.011 [2024-12-06 19:21:59.350796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990700, cid 4, qid 0 00:23:49.011 [2024-12-06 19:21:59.350939] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.011 [2024-12-06 19:21:59.350954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.011 [2024-12-06 19:21:59.350962] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.350968] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192e690): datao=0, datal=3072, cccid=4 00:23:49.011 [2024-12-06 19:21:59.350976] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1990700) on tqpair(0x192e690): expected_datao=0, payload_size=3072 00:23:49.011 [2024-12-06 19:21:59.350983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.350993] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.351001] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.351013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.011 [2024-12-06 19:21:59.351023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.011 [2024-12-06 19:21:59.351030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.351036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990700) on tqpair=0x192e690 00:23:49.011 [2024-12-06 19:21:59.351052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.351061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192e690) 00:23:49.011 [2024-12-06 19:21:59.351072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.011 [2024-12-06 19:21:59.351101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990700, cid 4, qid 0 00:23:49.011 [2024-12-06 
19:21:59.351202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.011 [2024-12-06 19:21:59.351214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.011 [2024-12-06 19:21:59.351221] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.351227] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192e690): datao=0, datal=8, cccid=4 00:23:49.011 [2024-12-06 19:21:59.351235] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1990700) on tqpair(0x192e690): expected_datao=0, payload_size=8 00:23:49.011 [2024-12-06 19:21:59.351242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.351252] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.351259] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.394679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.011 [2024-12-06 19:21:59.394698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.011 [2024-12-06 19:21:59.394705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.011 [2024-12-06 19:21:59.394727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990700) on tqpair=0x192e690 00:23:49.011 ===================================================== 00:23:49.011 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:49.011 ===================================================== 00:23:49.011 Controller Capabilities/Features 00:23:49.011 ================================ 00:23:49.011 Vendor ID: 0000 00:23:49.011 Subsystem Vendor ID: 0000 00:23:49.011 Serial Number: .................... 00:23:49.011 Model Number: ........................................ 
00:23:49.011 Firmware Version: 25.01 00:23:49.011 Recommended Arb Burst: 0 00:23:49.011 IEEE OUI Identifier: 00 00 00 00:23:49.011 Multi-path I/O 00:23:49.011 May have multiple subsystem ports: No 00:23:49.011 May have multiple controllers: No 00:23:49.011 Associated with SR-IOV VF: No 00:23:49.011 Max Data Transfer Size: 131072 00:23:49.011 Max Number of Namespaces: 0 00:23:49.011 Max Number of I/O Queues: 1024 00:23:49.011 NVMe Specification Version (VS): 1.3 00:23:49.011 NVMe Specification Version (Identify): 1.3 00:23:49.011 Maximum Queue Entries: 128 00:23:49.011 Contiguous Queues Required: Yes 00:23:49.011 Arbitration Mechanisms Supported 00:23:49.011 Weighted Round Robin: Not Supported 00:23:49.011 Vendor Specific: Not Supported 00:23:49.011 Reset Timeout: 15000 ms 00:23:49.011 Doorbell Stride: 4 bytes 00:23:49.011 NVM Subsystem Reset: Not Supported 00:23:49.011 Command Sets Supported 00:23:49.011 NVM Command Set: Supported 00:23:49.011 Boot Partition: Not Supported 00:23:49.011 Memory Page Size Minimum: 4096 bytes 00:23:49.011 Memory Page Size Maximum: 4096 bytes 00:23:49.011 Persistent Memory Region: Not Supported 00:23:49.011 Optional Asynchronous Events Supported 00:23:49.011 Namespace Attribute Notices: Not Supported 00:23:49.011 Firmware Activation Notices: Not Supported 00:23:49.011 ANA Change Notices: Not Supported 00:23:49.011 PLE Aggregate Log Change Notices: Not Supported 00:23:49.011 LBA Status Info Alert Notices: Not Supported 00:23:49.011 EGE Aggregate Log Change Notices: Not Supported 00:23:49.011 Normal NVM Subsystem Shutdown event: Not Supported 00:23:49.011 Zone Descriptor Change Notices: Not Supported 00:23:49.011 Discovery Log Change Notices: Supported 00:23:49.011 Controller Attributes 00:23:49.011 128-bit Host Identifier: Not Supported 00:23:49.011 Non-Operational Permissive Mode: Not Supported 00:23:49.011 NVM Sets: Not Supported 00:23:49.011 Read Recovery Levels: Not Supported 00:23:49.011 Endurance Groups: Not Supported 00:23:49.011 
Predictable Latency Mode: Not Supported 00:23:49.011 Traffic Based Keep ALive: Not Supported 00:23:49.011 Namespace Granularity: Not Supported 00:23:49.011 SQ Associations: Not Supported 00:23:49.011 UUID List: Not Supported 00:23:49.011 Multi-Domain Subsystem: Not Supported 00:23:49.011 Fixed Capacity Management: Not Supported 00:23:49.011 Variable Capacity Management: Not Supported 00:23:49.011 Delete Endurance Group: Not Supported 00:23:49.011 Delete NVM Set: Not Supported 00:23:49.011 Extended LBA Formats Supported: Not Supported 00:23:49.011 Flexible Data Placement Supported: Not Supported 00:23:49.011 00:23:49.011 Controller Memory Buffer Support 00:23:49.011 ================================ 00:23:49.011 Supported: No 00:23:49.011 00:23:49.011 Persistent Memory Region Support 00:23:49.011 ================================ 00:23:49.011 Supported: No 00:23:49.011 00:23:49.011 Admin Command Set Attributes 00:23:49.011 ============================ 00:23:49.011 Security Send/Receive: Not Supported 00:23:49.011 Format NVM: Not Supported 00:23:49.011 Firmware Activate/Download: Not Supported 00:23:49.011 Namespace Management: Not Supported 00:23:49.011 Device Self-Test: Not Supported 00:23:49.011 Directives: Not Supported 00:23:49.011 NVMe-MI: Not Supported 00:23:49.011 Virtualization Management: Not Supported 00:23:49.011 Doorbell Buffer Config: Not Supported 00:23:49.011 Get LBA Status Capability: Not Supported 00:23:49.011 Command & Feature Lockdown Capability: Not Supported 00:23:49.011 Abort Command Limit: 1 00:23:49.011 Async Event Request Limit: 4 00:23:49.011 Number of Firmware Slots: N/A 00:23:49.011 Firmware Slot 1 Read-Only: N/A 00:23:49.011 Firmware Activation Without Reset: N/A 00:23:49.011 Multiple Update Detection Support: N/A 00:23:49.011 Firmware Update Granularity: No Information Provided 00:23:49.011 Per-Namespace SMART Log: No 00:23:49.011 Asymmetric Namespace Access Log Page: Not Supported 00:23:49.011 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:49.011 Command Effects Log Page: Not Supported 00:23:49.011 Get Log Page Extended Data: Supported 00:23:49.011 Telemetry Log Pages: Not Supported 00:23:49.011 Persistent Event Log Pages: Not Supported 00:23:49.011 Supported Log Pages Log Page: May Support 00:23:49.011 Commands Supported & Effects Log Page: Not Supported 00:23:49.011 Feature Identifiers & Effects Log Page:May Support 00:23:49.011 NVMe-MI Commands & Effects Log Page: May Support 00:23:49.012 Data Area 4 for Telemetry Log: Not Supported 00:23:49.012 Error Log Page Entries Supported: 128 00:23:49.012 Keep Alive: Not Supported 00:23:49.012 00:23:49.012 NVM Command Set Attributes 00:23:49.012 ========================== 00:23:49.012 Submission Queue Entry Size 00:23:49.012 Max: 1 00:23:49.012 Min: 1 00:23:49.012 Completion Queue Entry Size 00:23:49.012 Max: 1 00:23:49.012 Min: 1 00:23:49.012 Number of Namespaces: 0 00:23:49.012 Compare Command: Not Supported 00:23:49.012 Write Uncorrectable Command: Not Supported 00:23:49.012 Dataset Management Command: Not Supported 00:23:49.012 Write Zeroes Command: Not Supported 00:23:49.012 Set Features Save Field: Not Supported 00:23:49.012 Reservations: Not Supported 00:23:49.012 Timestamp: Not Supported 00:23:49.012 Copy: Not Supported 00:23:49.012 Volatile Write Cache: Not Present 00:23:49.012 Atomic Write Unit (Normal): 1 00:23:49.012 Atomic Write Unit (PFail): 1 00:23:49.012 Atomic Compare & Write Unit: 1 00:23:49.012 Fused Compare & Write: Supported 00:23:49.012 Scatter-Gather List 00:23:49.012 SGL Command Set: Supported 00:23:49.012 SGL Keyed: Supported 00:23:49.012 SGL Bit Bucket Descriptor: Not Supported 00:23:49.012 SGL Metadata Pointer: Not Supported 00:23:49.012 Oversized SGL: Not Supported 00:23:49.012 SGL Metadata Address: Not Supported 00:23:49.012 SGL Offset: Supported 00:23:49.012 Transport SGL Data Block: Not Supported 00:23:49.012 Replay Protected Memory Block: Not Supported 00:23:49.012 00:23:49.012 
Firmware Slot Information 00:23:49.012 ========================= 00:23:49.012 Active slot: 0 00:23:49.012 00:23:49.012 00:23:49.012 Error Log 00:23:49.012 ========= 00:23:49.012 00:23:49.012 Active Namespaces 00:23:49.012 ================= 00:23:49.012 Discovery Log Page 00:23:49.012 ================== 00:23:49.012 Generation Counter: 2 00:23:49.012 Number of Records: 2 00:23:49.012 Record Format: 0 00:23:49.012 00:23:49.012 Discovery Log Entry 0 00:23:49.012 ---------------------- 00:23:49.012 Transport Type: 3 (TCP) 00:23:49.012 Address Family: 1 (IPv4) 00:23:49.012 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:49.012 Entry Flags: 00:23:49.012 Duplicate Returned Information: 1 00:23:49.012 Explicit Persistent Connection Support for Discovery: 1 00:23:49.012 Transport Requirements: 00:23:49.012 Secure Channel: Not Required 00:23:49.012 Port ID: 0 (0x0000) 00:23:49.012 Controller ID: 65535 (0xffff) 00:23:49.012 Admin Max SQ Size: 128 00:23:49.012 Transport Service Identifier: 4420 00:23:49.012 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:49.012 Transport Address: 10.0.0.2 00:23:49.012 Discovery Log Entry 1 00:23:49.012 ---------------------- 00:23:49.012 Transport Type: 3 (TCP) 00:23:49.012 Address Family: 1 (IPv4) 00:23:49.012 Subsystem Type: 2 (NVM Subsystem) 00:23:49.012 Entry Flags: 00:23:49.012 Duplicate Returned Information: 0 00:23:49.012 Explicit Persistent Connection Support for Discovery: 0 00:23:49.012 Transport Requirements: 00:23:49.012 Secure Channel: Not Required 00:23:49.012 Port ID: 0 (0x0000) 00:23:49.012 Controller ID: 65535 (0xffff) 00:23:49.012 Admin Max SQ Size: 128 00:23:49.012 Transport Service Identifier: 4420 00:23:49.012 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:49.012 Transport Address: 10.0.0.2 [2024-12-06 19:21:59.394851] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:49.012 [2024-12-06 
19:21:59.394876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990100) on tqpair=0x192e690 00:23:49.012 [2024-12-06 19:21:59.394890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.012 [2024-12-06 19:21:59.394899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990280) on tqpair=0x192e690 00:23:49.012 [2024-12-06 19:21:59.394907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.012 [2024-12-06 19:21:59.394919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990400) on tqpair=0x192e690 00:23:49.012 [2024-12-06 19:21:59.394928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.012 [2024-12-06 19:21:59.394936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.012 [2024-12-06 19:21:59.394943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.012 [2024-12-06 19:21:59.394957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.394966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.394973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.012 [2024-12-06 19:21:59.394984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.012 [2024-12-06 19:21:59.395025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.012 [2024-12-06 19:21:59.395187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.012 [2024-12-06 
19:21:59.395202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.012 [2024-12-06 19:21:59.395210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.012 [2024-12-06 19:21:59.395229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.012 [2024-12-06 19:21:59.395255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.012 [2024-12-06 19:21:59.395282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.012 [2024-12-06 19:21:59.395380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.012 [2024-12-06 19:21:59.395392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.012 [2024-12-06 19:21:59.395399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.012 [2024-12-06 19:21:59.395415] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:49.012 [2024-12-06 19:21:59.395424] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:49.012 [2024-12-06 19:21:59.395440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.012 
[2024-12-06 19:21:59.395456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.012 [2024-12-06 19:21:59.395466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.012 [2024-12-06 19:21:59.395488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.012 [2024-12-06 19:21:59.395569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.012 [2024-12-06 19:21:59.395583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.012 [2024-12-06 19:21:59.395591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.012 [2024-12-06 19:21:59.395615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.012 [2024-12-06 19:21:59.395646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.012 [2024-12-06 19:21:59.395676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.012 [2024-12-06 19:21:59.395756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.012 [2024-12-06 19:21:59.395769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.012 [2024-12-06 19:21:59.395776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on 
tqpair=0x192e690 00:23:49.012 [2024-12-06 19:21:59.395799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.012 [2024-12-06 19:21:59.395826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.012 [2024-12-06 19:21:59.395847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.012 [2024-12-06 19:21:59.395935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.012 [2024-12-06 19:21:59.395949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.012 [2024-12-06 19:21:59.395957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.012 [2024-12-06 19:21:59.395980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.012 [2024-12-06 19:21:59.395996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.396006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.396028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.396107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.396119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:49.013 [2024-12-06 19:21:59.396126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.396149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.396176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.396197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.396279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.396291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.396298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.396321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.396352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.396375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.396450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.396464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.396471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.396494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.396521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.396543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.396624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.396638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.396645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.396675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.396704] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.396725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.396807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.396821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.396828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.396851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.396867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.396878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.396899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.396980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.396994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.397001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.397024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397034] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.397051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.397077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.397156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.397169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.397176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.397199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.397225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.397246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.397326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.397338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.397345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397352] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.397368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.397394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.397415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.397493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.397505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.397512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.397535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.397551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.397561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.397581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.397659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 
19:21:59.398707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.398716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.398723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.398741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.398751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.398757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192e690) 00:23:49.013 [2024-12-06 19:21:59.398768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.013 [2024-12-06 19:21:59.398790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1990580, cid 3, qid 0 00:23:49.013 [2024-12-06 19:21:59.398912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.013 [2024-12-06 19:21:59.398925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.013 [2024-12-06 19:21:59.398933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.013 [2024-12-06 19:21:59.398939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1990580) on tqpair=0x192e690 00:23:49.013 [2024-12-06 19:21:59.398953] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 3 milliseconds 00:23:49.013 00:23:49.013 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:49.013 [2024-12-06 19:21:59.438076] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 
initialization... 00:23:49.013 [2024-12-06 19:21:59.438124] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179938 ] 00:23:49.013 [2024-12-06 19:21:59.496211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:49.013 [2024-12-06 19:21:59.496264] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:49.014 [2024-12-06 19:21:59.496274] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:49.014 [2024-12-06 19:21:59.496291] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:49.014 [2024-12-06 19:21:59.496303] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:49.014 [2024-12-06 19:21:59.496718] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:49.014 [2024-12-06 19:21:59.496759] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2247690 0 00:23:49.014 [2024-12-06 19:21:59.502687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:49.014 [2024-12-06 19:21:59.502707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:49.014 [2024-12-06 19:21:59.502717] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:49.014 [2024-12-06 19:21:59.502723] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:49.014 [2024-12-06 19:21:59.502759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.502771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.502778] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.014 [2024-12-06 19:21:59.502791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:49.014 [2024-12-06 19:21:59.502818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.014 [2024-12-06 19:21:59.510684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.014 [2024-12-06 19:21:59.510704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.014 [2024-12-06 19:21:59.510712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.510720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.014 [2024-12-06 19:21:59.510734] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:49.014 [2024-12-06 19:21:59.510745] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:49.014 [2024-12-06 19:21:59.510757] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:49.014 [2024-12-06 19:21:59.510783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.510793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.510800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.014 [2024-12-06 19:21:59.510811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.014 [2024-12-06 19:21:59.510836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.014 [2024-12-06 
19:21:59.510967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.014 [2024-12-06 19:21:59.510999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.014 [2024-12-06 19:21:59.511007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.014 [2024-12-06 19:21:59.511027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:49.014 [2024-12-06 19:21:59.511044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:49.014 [2024-12-06 19:21:59.511059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.014 [2024-12-06 19:21:59.511085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.014 [2024-12-06 19:21:59.511112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.014 [2024-12-06 19:21:59.511190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.014 [2024-12-06 19:21:59.511202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.014 [2024-12-06 19:21:59.511209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.014 [2024-12-06 19:21:59.511227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:49.014 [2024-12-06 19:21:59.511242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:49.014 [2024-12-06 19:21:59.511255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.014 [2024-12-06 19:21:59.511281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.014 [2024-12-06 19:21:59.511302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.014 [2024-12-06 19:21:59.511382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.014 [2024-12-06 19:21:59.511396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.014 [2024-12-06 19:21:59.511403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.014 [2024-12-06 19:21:59.511418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:49.014 [2024-12-06 19:21:59.511435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.014 [2024-12-06 19:21:59.511465] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.014 [2024-12-06 19:21:59.511487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.014 [2024-12-06 19:21:59.511565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.014 [2024-12-06 19:21:59.511580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.014 [2024-12-06 19:21:59.511587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.014 [2024-12-06 19:21:59.511601] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:49.014 [2024-12-06 19:21:59.511609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:49.014 [2024-12-06 19:21:59.511622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:49.014 [2024-12-06 19:21:59.511733] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:49.014 [2024-12-06 19:21:59.511746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:49.014 [2024-12-06 19:21:59.511759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.014 [2024-12-06 19:21:59.511784] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.014 [2024-12-06 19:21:59.511807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.014 [2024-12-06 19:21:59.511922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.014 [2024-12-06 19:21:59.511940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.014 [2024-12-06 19:21:59.511962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.511969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.014 [2024-12-06 19:21:59.511978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:49.014 [2024-12-06 19:21:59.511999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.512014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.512024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.014 [2024-12-06 19:21:59.512036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.014 [2024-12-06 19:21:59.512059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.014 [2024-12-06 19:21:59.512134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.014 [2024-12-06 19:21:59.512147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.014 [2024-12-06 19:21:59.512154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.512161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.014 [2024-12-06 19:21:59.512168] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:49.014 [2024-12-06 19:21:59.512177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:49.014 [2024-12-06 19:21:59.512195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:49.014 [2024-12-06 19:21:59.512215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:49.014 [2024-12-06 19:21:59.512236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.512245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.014 [2024-12-06 19:21:59.512268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.014 [2024-12-06 19:21:59.512290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.014 [2024-12-06 19:21:59.512450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.014 [2024-12-06 19:21:59.512466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.014 [2024-12-06 19:21:59.512474] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.014 [2024-12-06 19:21:59.512481] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247690): datao=0, datal=4096, cccid=0 00:23:49.014 [2024-12-06 19:21:59.512489] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a9100) on tqpair(0x2247690): expected_datao=0, 
payload_size=4096 00:23:49.014 [2024-12-06 19:21:59.512496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512507] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512514] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.015 [2024-12-06 19:21:59.512536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.015 [2024-12-06 19:21:59.512543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.015 [2024-12-06 19:21:59.512561] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:49.015 [2024-12-06 19:21:59.512569] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:49.015 [2024-12-06 19:21:59.512577] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:49.015 [2024-12-06 19:21:59.512584] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:49.015 [2024-12-06 19:21:59.512591] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:49.015 [2024-12-06 19:21:59.512599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:49.015 [2024-12-06 19:21:59.512614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:49.015 [2024-12-06 19:21:59.512626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.015 [2024-12-06 19:21:59.512661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:49.015 [2024-12-06 19:21:59.512697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.015 [2024-12-06 19:21:59.512828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.015 [2024-12-06 19:21:59.512843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.015 [2024-12-06 19:21:59.512850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.015 [2024-12-06 19:21:59.512880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247690) 00:23:49.015 [2024-12-06 19:21:59.512905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.015 [2024-12-06 19:21:59.512916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2247690) 00:23:49.015 [2024-12-06 19:21:59.512937] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.015 [2024-12-06 19:21:59.512947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.512975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2247690) 00:23:49.015 [2024-12-06 19:21:59.512984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.015 [2024-12-06 19:21:59.512994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.513001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.513007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.015 [2024-12-06 19:21:59.513015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.015 [2024-12-06 19:21:59.513039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:49.015 [2024-12-06 19:21:59.513058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:49.015 [2024-12-06 19:21:59.513076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.513086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247690) 00:23:49.015 [2024-12-06 19:21:59.513096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.015 
[2024-12-06 19:21:59.513119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9100, cid 0, qid 0 00:23:49.015 [2024-12-06 19:21:59.513145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9280, cid 1, qid 0 00:23:49.015 [2024-12-06 19:21:59.513153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9400, cid 2, qid 0 00:23:49.015 [2024-12-06 19:21:59.513161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.015 [2024-12-06 19:21:59.513168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9700, cid 4, qid 0 00:23:49.015 [2024-12-06 19:21:59.513309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.015 [2024-12-06 19:21:59.513324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.015 [2024-12-06 19:21:59.513331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.513338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9700) on tqpair=0x2247690 00:23:49.015 [2024-12-06 19:21:59.513346] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:49.015 [2024-12-06 19:21:59.513355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:49.015 [2024-12-06 19:21:59.513378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:49.015 [2024-12-06 19:21:59.513392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:49.015 [2024-12-06 19:21:59.513403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.513417] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.513423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247690) 00:23:49.015 [2024-12-06 19:21:59.513434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:49.015 [2024-12-06 19:21:59.513469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9700, cid 4, qid 0 00:23:49.015 [2024-12-06 19:21:59.513625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.015 [2024-12-06 19:21:59.513640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.015 [2024-12-06 19:21:59.513662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.513685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9700) on tqpair=0x2247690 00:23:49.015 [2024-12-06 19:21:59.513762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:49.015 [2024-12-06 19:21:59.513784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:49.015 [2024-12-06 19:21:59.513801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.513809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247690) 00:23:49.015 [2024-12-06 19:21:59.513820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.015 [2024-12-06 19:21:59.513842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9700, cid 4, qid 0 00:23:49.015 [2024-12-06 19:21:59.513998] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.015 [2024-12-06 19:21:59.514015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.015 [2024-12-06 19:21:59.514022] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.514028] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247690): datao=0, datal=4096, cccid=4 00:23:49.015 [2024-12-06 19:21:59.514036] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a9700) on tqpair(0x2247690): expected_datao=0, payload_size=4096 00:23:49.015 [2024-12-06 19:21:59.514043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.514060] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.514069] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.557677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.015 [2024-12-06 19:21:59.557697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.015 [2024-12-06 19:21:59.557705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.015 [2024-12-06 19:21:59.557712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9700) on tqpair=0x2247690 00:23:49.016 [2024-12-06 19:21:59.557738] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:49.016 [2024-12-06 19:21:59.557760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:49.016 [2024-12-06 19:21:59.557790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:49.016 [2024-12-06 19:21:59.557804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:49.016 [2024-12-06 19:21:59.557816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247690) 00:23:49.016 [2024-12-06 19:21:59.557829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.016 [2024-12-06 19:21:59.557854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9700, cid 4, qid 0 00:23:49.016 [2024-12-06 19:21:59.558033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.016 [2024-12-06 19:21:59.558048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.016 [2024-12-06 19:21:59.558055] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.016 [2024-12-06 19:21:59.558061] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247690): datao=0, datal=4096, cccid=4 00:23:49.016 [2024-12-06 19:21:59.558069] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a9700) on tqpair(0x2247690): expected_datao=0, payload_size=4096 00:23:49.016 [2024-12-06 19:21:59.558076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.016 [2024-12-06 19:21:59.558093] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.016 [2024-12-06 19:21:59.558102] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.312 [2024-12-06 19:21:59.598761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.312 [2024-12-06 19:21:59.598782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.312 [2024-12-06 19:21:59.598790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.312 [2024-12-06 19:21:59.598797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9700) on tqpair=0x2247690 00:23:49.312 [2024-12-06 19:21:59.598821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:49.312 [2024-12-06 19:21:59.598843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:49.312 [2024-12-06 19:21:59.598858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.312 [2024-12-06 19:21:59.598867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247690) 00:23:49.312 [2024-12-06 19:21:59.598879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.312 [2024-12-06 19:21:59.598903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9700, cid 4, qid 0 00:23:49.312 [2024-12-06 19:21:59.599008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.312 [2024-12-06 19:21:59.599024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.312 [2024-12-06 19:21:59.599031] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.312 [2024-12-06 19:21:59.599037] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247690): datao=0, datal=4096, cccid=4 00:23:49.312 [2024-12-06 19:21:59.599045] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a9700) on tqpair(0x2247690): expected_datao=0, payload_size=4096 00:23:49.312 [2024-12-06 19:21:59.599052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.312 [2024-12-06 19:21:59.599070] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.312 [2024-12-06 19:21:59.599080] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.312 [2024-12-06 19:21:59.639746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.312 [2024-12-06 19:21:59.639767] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.312 [2024-12-06 19:21:59.639776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.312 [2024-12-06 19:21:59.639783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9700) on tqpair=0x2247690 00:23:49.312 [2024-12-06 19:21:59.639804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:49.312 [2024-12-06 19:21:59.639822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:49.312 [2024-12-06 19:21:59.639842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:49.312 [2024-12-06 19:21:59.639856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:49.312 [2024-12-06 19:21:59.639865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:49.312 [2024-12-06 19:21:59.639874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:49.312 [2024-12-06 19:21:59.639883] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:49.312 [2024-12-06 19:21:59.639891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:49.313 [2024-12-06 19:21:59.639900] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:49.313 [2024-12-06 19:21:59.639920] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.639929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247690) 00:23:49.313 [2024-12-06 19:21:59.639941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.313 [2024-12-06 19:21:59.639953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.639960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.639967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2247690) 00:23:49.313 [2024-12-06 19:21:59.639976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.313 [2024-12-06 19:21:59.640019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9700, cid 4, qid 0 00:23:49.313 [2024-12-06 19:21:59.640032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9880, cid 5, qid 0 00:23:49.313 [2024-12-06 19:21:59.640131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.313 [2024-12-06 19:21:59.640143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.313 [2024-12-06 19:21:59.640150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.640156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9700) on tqpair=0x2247690 00:23:49.313 [2024-12-06 19:21:59.640168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.313 [2024-12-06 19:21:59.640178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.313 [2024-12-06 19:21:59.640184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.640191] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9880) on tqpair=0x2247690 00:23:49.313 [2024-12-06 19:21:59.640206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.640214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2247690) 00:23:49.313 [2024-12-06 19:21:59.640225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.313 [2024-12-06 19:21:59.640246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9880, cid 5, qid 0 00:23:49.313 [2024-12-06 19:21:59.640328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.313 [2024-12-06 19:21:59.640342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.313 [2024-12-06 19:21:59.640348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.640355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9880) on tqpair=0x2247690 00:23:49.313 [2024-12-06 19:21:59.640374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.640384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2247690) 00:23:49.313 [2024-12-06 19:21:59.640395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.313 [2024-12-06 19:21:59.640415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9880, cid 5, qid 0 00:23:49.313 [2024-12-06 19:21:59.640491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.313 [2024-12-06 19:21:59.640505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.313 [2024-12-06 19:21:59.640512] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.640518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9880) on tqpair=0x2247690 00:23:49.313 [2024-12-06 19:21:59.640533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.640542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2247690) 00:23:49.313 [2024-12-06 19:21:59.640554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.313 [2024-12-06 19:21:59.640584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9880, cid 5, qid 0 00:23:49.313 [2024-12-06 19:21:59.644680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.313 [2024-12-06 19:21:59.644697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.313 [2024-12-06 19:21:59.644704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.644711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9880) on tqpair=0x2247690 00:23:49.313 [2024-12-06 19:21:59.644738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.644749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2247690) 00:23:49.313 [2024-12-06 19:21:59.644761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.313 [2024-12-06 19:21:59.644775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.644783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247690) 00:23:49.313 [2024-12-06 19:21:59.644792] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.313 [2024-12-06 19:21:59.644805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.644813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2247690) 00:23:49.313 [2024-12-06 19:21:59.644822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.313 [2024-12-06 19:21:59.644834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.644842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2247690) 00:23:49.313 [2024-12-06 19:21:59.644851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.313 [2024-12-06 19:21:59.644875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9880, cid 5, qid 0 00:23:49.313 [2024-12-06 19:21:59.644886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9700, cid 4, qid 0 00:23:49.313 [2024-12-06 19:21:59.644894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9a00, cid 6, qid 0 00:23:49.313 [2024-12-06 19:21:59.644901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9b80, cid 7, qid 0 00:23:49.313 [2024-12-06 19:21:59.645104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.313 [2024-12-06 19:21:59.645126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.313 [2024-12-06 19:21:59.645138] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.645144] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247690): datao=0, datal=8192, cccid=5 00:23:49.313 [2024-12-06 19:21:59.645152] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a9880) on tqpair(0x2247690): expected_datao=0, payload_size=8192 00:23:49.313 [2024-12-06 19:21:59.645159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.645179] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.645188] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.645200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.313 [2024-12-06 19:21:59.645210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.313 [2024-12-06 19:21:59.645216] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.645222] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247690): datao=0, datal=512, cccid=4 00:23:49.313 [2024-12-06 19:21:59.645230] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a9700) on tqpair(0x2247690): expected_datao=0, payload_size=512 00:23:49.313 [2024-12-06 19:21:59.645237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.645246] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.645253] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.313 [2024-12-06 19:21:59.645265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.314 [2024-12-06 19:21:59.645274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.314 [2024-12-06 19:21:59.645280] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645286] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x2247690): datao=0, datal=512, cccid=6 00:23:49.314 [2024-12-06 19:21:59.645294] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a9a00) on tqpair(0x2247690): expected_datao=0, payload_size=512 00:23:49.314 [2024-12-06 19:21:59.645301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645310] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645316] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.314 [2024-12-06 19:21:59.645333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.314 [2024-12-06 19:21:59.645339] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645345] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247690): datao=0, datal=4096, cccid=7 00:23:49.314 [2024-12-06 19:21:59.645353] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a9b80) on tqpair(0x2247690): expected_datao=0, payload_size=4096 00:23:49.314 [2024-12-06 19:21:59.645360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645369] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645376] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.314 [2024-12-06 19:21:59.645392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.314 [2024-12-06 19:21:59.645399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9880) on tqpair=0x2247690 00:23:49.314 [2024-12-06 
19:21:59.645441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.314 [2024-12-06 19:21:59.645452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.314 [2024-12-06 19:21:59.645458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9700) on tqpair=0x2247690 00:23:49.314 [2024-12-06 19:21:59.645484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.314 [2024-12-06 19:21:59.645494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.314 [2024-12-06 19:21:59.645501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9a00) on tqpair=0x2247690 00:23:49.314 [2024-12-06 19:21:59.645517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.314 [2024-12-06 19:21:59.645526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.314 [2024-12-06 19:21:59.645533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.314 [2024-12-06 19:21:59.645539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9b80) on tqpair=0x2247690 00:23:49.314 ===================================================== 00:23:49.314 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.314 ===================================================== 00:23:49.314 Controller Capabilities/Features 00:23:49.314 ================================ 00:23:49.314 Vendor ID: 8086 00:23:49.314 Subsystem Vendor ID: 8086 00:23:49.314 Serial Number: SPDK00000000000001 00:23:49.314 Model Number: SPDK bdev Controller 00:23:49.314 Firmware Version: 25.01 00:23:49.314 Recommended Arb Burst: 6 00:23:49.314 IEEE OUI Identifier: e4 d2 5c 00:23:49.314 Multi-path I/O 00:23:49.314 May have 
multiple subsystem ports: Yes 00:23:49.314 May have multiple controllers: Yes 00:23:49.314 Associated with SR-IOV VF: No 00:23:49.314 Max Data Transfer Size: 131072 00:23:49.314 Max Number of Namespaces: 32 00:23:49.314 Max Number of I/O Queues: 127 00:23:49.314 NVMe Specification Version (VS): 1.3 00:23:49.314 NVMe Specification Version (Identify): 1.3 00:23:49.314 Maximum Queue Entries: 128 00:23:49.314 Contiguous Queues Required: Yes 00:23:49.314 Arbitration Mechanisms Supported 00:23:49.314 Weighted Round Robin: Not Supported 00:23:49.314 Vendor Specific: Not Supported 00:23:49.314 Reset Timeout: 15000 ms 00:23:49.314 Doorbell Stride: 4 bytes 00:23:49.314 NVM Subsystem Reset: Not Supported 00:23:49.314 Command Sets Supported 00:23:49.314 NVM Command Set: Supported 00:23:49.314 Boot Partition: Not Supported 00:23:49.314 Memory Page Size Minimum: 4096 bytes 00:23:49.314 Memory Page Size Maximum: 4096 bytes 00:23:49.314 Persistent Memory Region: Not Supported 00:23:49.314 Optional Asynchronous Events Supported 00:23:49.314 Namespace Attribute Notices: Supported 00:23:49.314 Firmware Activation Notices: Not Supported 00:23:49.314 ANA Change Notices: Not Supported 00:23:49.314 PLE Aggregate Log Change Notices: Not Supported 00:23:49.314 LBA Status Info Alert Notices: Not Supported 00:23:49.314 EGE Aggregate Log Change Notices: Not Supported 00:23:49.314 Normal NVM Subsystem Shutdown event: Not Supported 00:23:49.314 Zone Descriptor Change Notices: Not Supported 00:23:49.314 Discovery Log Change Notices: Not Supported 00:23:49.314 Controller Attributes 00:23:49.314 128-bit Host Identifier: Supported 00:23:49.314 Non-Operational Permissive Mode: Not Supported 00:23:49.314 NVM Sets: Not Supported 00:23:49.314 Read Recovery Levels: Not Supported 00:23:49.314 Endurance Groups: Not Supported 00:23:49.314 Predictable Latency Mode: Not Supported 00:23:49.314 Traffic Based Keep ALive: Not Supported 00:23:49.314 Namespace Granularity: Not Supported 00:23:49.314 SQ 
Associations: Not Supported 00:23:49.314 UUID List: Not Supported 00:23:49.314 Multi-Domain Subsystem: Not Supported 00:23:49.314 Fixed Capacity Management: Not Supported 00:23:49.314 Variable Capacity Management: Not Supported 00:23:49.314 Delete Endurance Group: Not Supported 00:23:49.314 Delete NVM Set: Not Supported 00:23:49.314 Extended LBA Formats Supported: Not Supported 00:23:49.314 Flexible Data Placement Supported: Not Supported 00:23:49.314 00:23:49.314 Controller Memory Buffer Support 00:23:49.314 ================================ 00:23:49.314 Supported: No 00:23:49.314 00:23:49.314 Persistent Memory Region Support 00:23:49.314 ================================ 00:23:49.314 Supported: No 00:23:49.314 00:23:49.314 Admin Command Set Attributes 00:23:49.314 ============================ 00:23:49.314 Security Send/Receive: Not Supported 00:23:49.314 Format NVM: Not Supported 00:23:49.314 Firmware Activate/Download: Not Supported 00:23:49.314 Namespace Management: Not Supported 00:23:49.314 Device Self-Test: Not Supported 00:23:49.314 Directives: Not Supported 00:23:49.314 NVMe-MI: Not Supported 00:23:49.314 Virtualization Management: Not Supported 00:23:49.314 Doorbell Buffer Config: Not Supported 00:23:49.314 Get LBA Status Capability: Not Supported 00:23:49.314 Command & Feature Lockdown Capability: Not Supported 00:23:49.314 Abort Command Limit: 4 00:23:49.314 Async Event Request Limit: 4 00:23:49.314 Number of Firmware Slots: N/A 00:23:49.315 Firmware Slot 1 Read-Only: N/A 00:23:49.315 Firmware Activation Without Reset: N/A 00:23:49.315 Multiple Update Detection Support: N/A 00:23:49.315 Firmware Update Granularity: No Information Provided 00:23:49.315 Per-Namespace SMART Log: No 00:23:49.315 Asymmetric Namespace Access Log Page: Not Supported 00:23:49.315 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:49.315 Command Effects Log Page: Supported 00:23:49.315 Get Log Page Extended Data: Supported 00:23:49.315 Telemetry Log Pages: Not Supported 00:23:49.315 
Persistent Event Log Pages: Not Supported 00:23:49.315 Supported Log Pages Log Page: May Support 00:23:49.315 Commands Supported & Effects Log Page: Not Supported 00:23:49.315 Feature Identifiers & Effects Log Page:May Support 00:23:49.315 NVMe-MI Commands & Effects Log Page: May Support 00:23:49.315 Data Area 4 for Telemetry Log: Not Supported 00:23:49.315 Error Log Page Entries Supported: 128 00:23:49.315 Keep Alive: Supported 00:23:49.315 Keep Alive Granularity: 10000 ms 00:23:49.315 00:23:49.315 NVM Command Set Attributes 00:23:49.315 ========================== 00:23:49.315 Submission Queue Entry Size 00:23:49.315 Max: 64 00:23:49.315 Min: 64 00:23:49.315 Completion Queue Entry Size 00:23:49.315 Max: 16 00:23:49.315 Min: 16 00:23:49.315 Number of Namespaces: 32 00:23:49.315 Compare Command: Supported 00:23:49.315 Write Uncorrectable Command: Not Supported 00:23:49.315 Dataset Management Command: Supported 00:23:49.315 Write Zeroes Command: Supported 00:23:49.315 Set Features Save Field: Not Supported 00:23:49.315 Reservations: Supported 00:23:49.315 Timestamp: Not Supported 00:23:49.315 Copy: Supported 00:23:49.315 Volatile Write Cache: Present 00:23:49.315 Atomic Write Unit (Normal): 1 00:23:49.315 Atomic Write Unit (PFail): 1 00:23:49.315 Atomic Compare & Write Unit: 1 00:23:49.315 Fused Compare & Write: Supported 00:23:49.315 Scatter-Gather List 00:23:49.315 SGL Command Set: Supported 00:23:49.315 SGL Keyed: Supported 00:23:49.315 SGL Bit Bucket Descriptor: Not Supported 00:23:49.315 SGL Metadata Pointer: Not Supported 00:23:49.315 Oversized SGL: Not Supported 00:23:49.315 SGL Metadata Address: Not Supported 00:23:49.315 SGL Offset: Supported 00:23:49.315 Transport SGL Data Block: Not Supported 00:23:49.315 Replay Protected Memory Block: Not Supported 00:23:49.315 00:23:49.315 Firmware Slot Information 00:23:49.315 ========================= 00:23:49.315 Active slot: 1 00:23:49.315 Slot 1 Firmware Revision: 25.01 00:23:49.315 00:23:49.315 00:23:49.315 
Commands Supported and Effects 00:23:49.315 ============================== 00:23:49.315 Admin Commands 00:23:49.315 -------------- 00:23:49.315 Get Log Page (02h): Supported 00:23:49.315 Identify (06h): Supported 00:23:49.315 Abort (08h): Supported 00:23:49.315 Set Features (09h): Supported 00:23:49.315 Get Features (0Ah): Supported 00:23:49.315 Asynchronous Event Request (0Ch): Supported 00:23:49.315 Keep Alive (18h): Supported 00:23:49.315 I/O Commands 00:23:49.315 ------------ 00:23:49.315 Flush (00h): Supported LBA-Change 00:23:49.315 Write (01h): Supported LBA-Change 00:23:49.315 Read (02h): Supported 00:23:49.315 Compare (05h): Supported 00:23:49.315 Write Zeroes (08h): Supported LBA-Change 00:23:49.315 Dataset Management (09h): Supported LBA-Change 00:23:49.315 Copy (19h): Supported LBA-Change 00:23:49.315 00:23:49.315 Error Log 00:23:49.315 ========= 00:23:49.315 00:23:49.315 Arbitration 00:23:49.315 =========== 00:23:49.315 Arbitration Burst: 1 00:23:49.315 00:23:49.315 Power Management 00:23:49.315 ================ 00:23:49.315 Number of Power States: 1 00:23:49.315 Current Power State: Power State #0 00:23:49.315 Power State #0: 00:23:49.315 Max Power: 0.00 W 00:23:49.315 Non-Operational State: Operational 00:23:49.315 Entry Latency: Not Reported 00:23:49.315 Exit Latency: Not Reported 00:23:49.315 Relative Read Throughput: 0 00:23:49.315 Relative Read Latency: 0 00:23:49.315 Relative Write Throughput: 0 00:23:49.315 Relative Write Latency: 0 00:23:49.315 Idle Power: Not Reported 00:23:49.315 Active Power: Not Reported 00:23:49.315 Non-Operational Permissive Mode: Not Supported 00:23:49.315 00:23:49.315 Health Information 00:23:49.315 ================== 00:23:49.315 Critical Warnings: 00:23:49.315 Available Spare Space: OK 00:23:49.315 Temperature: OK 00:23:49.315 Device Reliability: OK 00:23:49.315 Read Only: No 00:23:49.315 Volatile Memory Backup: OK 00:23:49.315 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:49.315 Temperature Threshold: 0 Kelvin 
(-273 Celsius) 00:23:49.315 Available Spare: 0% 00:23:49.315 Available Spare Threshold: 0% 00:23:49.315 Life Percentage Used:[2024-12-06 19:21:59.645670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.315 [2024-12-06 19:21:59.645684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2247690) 00:23:49.315 [2024-12-06 19:21:59.645695] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.315 [2024-12-06 19:21:59.645719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9b80, cid 7, qid 0 00:23:49.315 [2024-12-06 19:21:59.645828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.315 [2024-12-06 19:21:59.645842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.315 [2024-12-06 19:21:59.645849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.315 [2024-12-06 19:21:59.645856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9b80) on tqpair=0x2247690 00:23:49.315 [2024-12-06 19:21:59.645908] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:49.315 [2024-12-06 19:21:59.645928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9100) on tqpair=0x2247690 00:23:49.315 [2024-12-06 19:21:59.645939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.315 [2024-12-06 19:21:59.645948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9280) on tqpair=0x2247690 00:23:49.315 [2024-12-06 19:21:59.645956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.315 [2024-12-06 19:21:59.645964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x22a9400) on tqpair=0x2247690 00:23:49.315 [2024-12-06 19:21:59.645987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.316 [2024-12-06 19:21:59.645996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.646003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.316 [2024-12-06 19:21:59.646015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.316 [2024-12-06 19:21:59.646055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.316 [2024-12-06 19:21:59.646078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.316 [2024-12-06 19:21:59.646222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.316 [2024-12-06 19:21:59.646236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.316 [2024-12-06 19:21:59.646243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.646267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 
00:23:49.316 [2024-12-06 19:21:59.646293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.316 [2024-12-06 19:21:59.646319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.316 [2024-12-06 19:21:59.646411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.316 [2024-12-06 19:21:59.646424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.316 [2024-12-06 19:21:59.646431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.646445] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:49.316 [2024-12-06 19:21:59.646453] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:49.316 [2024-12-06 19:21:59.646468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.316 [2024-12-06 19:21:59.646493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.316 [2024-12-06 19:21:59.646513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.316 [2024-12-06 19:21:59.646590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.316 [2024-12-06 19:21:59.646604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.316 [2024-12-06 19:21:59.646611] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.646633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.316 [2024-12-06 19:21:59.646684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.316 [2024-12-06 19:21:59.646706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.316 [2024-12-06 19:21:59.646800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.316 [2024-12-06 19:21:59.646813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.316 [2024-12-06 19:21:59.646820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.646842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.646858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.316 [2024-12-06 19:21:59.646869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.316 [2024-12-06 19:21:59.646890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.316 [2024-12-06 
19:21:59.646965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.316 [2024-12-06 19:21:59.646978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.316 [2024-12-06 19:21:59.646988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.647028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.316 [2024-12-06 19:21:59.647053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.316 [2024-12-06 19:21:59.647074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.316 [2024-12-06 19:21:59.647151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.316 [2024-12-06 19:21:59.647162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.316 [2024-12-06 19:21:59.647169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.647191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.316 [2024-12-06 19:21:59.647216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.316 [2024-12-06 19:21:59.647236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.316 [2024-12-06 19:21:59.647310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.316 [2024-12-06 19:21:59.647322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.316 [2024-12-06 19:21:59.647329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.647351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.316 [2024-12-06 19:21:59.647376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.316 [2024-12-06 19:21:59.647396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.316 [2024-12-06 19:21:59.647486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.316 [2024-12-06 19:21:59.647498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.316 [2024-12-06 19:21:59.647505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.647526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:49.316 [2024-12-06 19:21:59.647542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.316 [2024-12-06 19:21:59.647552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.316 [2024-12-06 19:21:59.647571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.316 [2024-12-06 19:21:59.647678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.316 [2024-12-06 19:21:59.647692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.316 [2024-12-06 19:21:59.647699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.316 [2024-12-06 19:21:59.647710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.316 [2024-12-06 19:21:59.647728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.647738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.647744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.317 [2024-12-06 19:21:59.647755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.317 [2024-12-06 19:21:59.647776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.317 [2024-12-06 19:21:59.647854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.317 [2024-12-06 19:21:59.647867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.317 [2024-12-06 19:21:59.647874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.647880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) 
on tqpair=0x2247690 00:23:49.317 [2024-12-06 19:21:59.647896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.647905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.647912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.317 [2024-12-06 19:21:59.647922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.317 [2024-12-06 19:21:59.647943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.317 [2024-12-06 19:21:59.648051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.317 [2024-12-06 19:21:59.648065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.317 [2024-12-06 19:21:59.648072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.317 [2024-12-06 19:21:59.648095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.317 [2024-12-06 19:21:59.648120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.317 [2024-12-06 19:21:59.648141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.317 [2024-12-06 19:21:59.648218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.317 [2024-12-06 19:21:59.648232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:23:49.317 [2024-12-06 19:21:59.648239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.317 [2024-12-06 19:21:59.648261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.317 [2024-12-06 19:21:59.648286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.317 [2024-12-06 19:21:59.648306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.317 [2024-12-06 19:21:59.648382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.317 [2024-12-06 19:21:59.648395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.317 [2024-12-06 19:21:59.648402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.317 [2024-12-06 19:21:59.648428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.317 [2024-12-06 19:21:59.648454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.317 [2024-12-06 19:21:59.648474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x22a9580, cid 3, qid 0 00:23:49.317 [2024-12-06 19:21:59.648549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.317 [2024-12-06 19:21:59.648561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.317 [2024-12-06 19:21:59.648568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.317 [2024-12-06 19:21:59.648590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.648605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.317 [2024-12-06 19:21:59.648615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.317 [2024-12-06 19:21:59.648635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.317 [2024-12-06 19:21:59.652681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.317 [2024-12-06 19:21:59.652698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.317 [2024-12-06 19:21:59.652706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.652713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.317 [2024-12-06 19:21:59.652729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.652739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.652746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247690) 00:23:49.317 [2024-12-06 19:21:59.652756] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.317 [2024-12-06 19:21:59.652779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a9580, cid 3, qid 0 00:23:49.317 [2024-12-06 19:21:59.652891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.317 [2024-12-06 19:21:59.652903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.317 [2024-12-06 19:21:59.652910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.317 [2024-12-06 19:21:59.652917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a9580) on tqpair=0x2247690 00:23:49.317 [2024-12-06 19:21:59.652929] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:23:49.317 0% 00:23:49.317 Data Units Read: 0 00:23:49.317 Data Units Written: 0 00:23:49.317 Host Read Commands: 0 00:23:49.317 Host Write Commands: 0 00:23:49.317 Controller Busy Time: 0 minutes 00:23:49.317 Power Cycles: 0 00:23:49.317 Power On Hours: 0 hours 00:23:49.317 Unsafe Shutdowns: 0 00:23:49.317 Unrecoverable Media Errors: 0 00:23:49.317 Lifetime Error Log Entries: 0 00:23:49.317 Warning Temperature Time: 0 minutes 00:23:49.317 Critical Temperature Time: 0 minutes 00:23:49.317 00:23:49.317 Number of Queues 00:23:49.317 ================ 00:23:49.317 Number of I/O Submission Queues: 127 00:23:49.317 Number of I/O Completion Queues: 127 00:23:49.317 00:23:49.317 Active Namespaces 00:23:49.317 ================= 00:23:49.317 Namespace ID:1 00:23:49.317 Error Recovery Timeout: Unlimited 00:23:49.317 Command Set Identifier: NVM (00h) 00:23:49.317 Deallocate: Supported 00:23:49.317 Deallocated/Unwritten Error: Not Supported 00:23:49.317 Deallocated Read Value: Unknown 00:23:49.317 Deallocate in Write Zeroes: Not Supported 00:23:49.317 Deallocated Guard Field: 0xFFFF 00:23:49.317 
Flush: Supported 00:23:49.317 Reservation: Supported 00:23:49.317 Namespace Sharing Capabilities: Multiple Controllers 00:23:49.317 Size (in LBAs): 131072 (0GiB) 00:23:49.317 Capacity (in LBAs): 131072 (0GiB) 00:23:49.317 Utilization (in LBAs): 131072 (0GiB) 00:23:49.317 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:49.317 EUI64: ABCDEF0123456789 00:23:49.317 UUID: b6f943d7-fed0-4152-a403-ecc85746acc1 00:23:49.318 Thin Provisioning: Not Supported 00:23:49.318 Per-NS Atomic Units: Yes 00:23:49.318 Atomic Boundary Size (Normal): 0 00:23:49.318 Atomic Boundary Size (PFail): 0 00:23:49.318 Atomic Boundary Offset: 0 00:23:49.318 Maximum Single Source Range Length: 65535 00:23:49.318 Maximum Copy Length: 65535 00:23:49.318 Maximum Source Range Count: 1 00:23:49.318 NGUID/EUI64 Never Reused: No 00:23:49.318 Namespace Write Protected: No 00:23:49.318 Number of LBA Formats: 1 00:23:49.318 Current LBA Format: LBA Format #00 00:23:49.318 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:49.318 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.318 rmmod nvme_tcp 00:23:49.318 rmmod nvme_fabrics 00:23:49.318 rmmod nvme_keyring 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1179794 ']' 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1179794 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1179794 ']' 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1179794 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179794 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179794' 00:23:49.318 killing process with pid 1179794 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 
1179794 00:23:49.318 19:21:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1179794 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:49.603 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.604 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.604 19:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.509 19:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.509 00:23:51.509 real 0m5.787s 00:23:51.509 user 0m5.101s 00:23:51.509 sys 0m2.069s 00:23:51.509 19:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.509 19:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.509 ************************************ 00:23:51.509 END TEST nvmf_identify 00:23:51.509 ************************************ 00:23:51.509 19:22:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 
-- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:51.509 19:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:51.509 19:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.509 19:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.768 ************************************ 00:23:51.768 START TEST nvmf_perf 00:23:51.768 ************************************ 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:51.768 * Looking for test storage... 00:23:51.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.768 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@338 -- # local 'op=<' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:51.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.769 --rc genhtml_branch_coverage=1 00:23:51.769 --rc genhtml_function_coverage=1 00:23:51.769 --rc genhtml_legend=1 00:23:51.769 --rc geninfo_all_blocks=1 00:23:51.769 --rc geninfo_unexecuted_blocks=1 00:23:51.769 00:23:51.769 ' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:51.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.769 --rc genhtml_branch_coverage=1 00:23:51.769 --rc genhtml_function_coverage=1 00:23:51.769 --rc genhtml_legend=1 00:23:51.769 --rc geninfo_all_blocks=1 00:23:51.769 --rc geninfo_unexecuted_blocks=1 00:23:51.769 00:23:51.769 ' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:51.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.769 --rc genhtml_branch_coverage=1 00:23:51.769 --rc genhtml_function_coverage=1 00:23:51.769 --rc genhtml_legend=1 00:23:51.769 --rc geninfo_all_blocks=1 00:23:51.769 --rc geninfo_unexecuted_blocks=1 00:23:51.769 00:23:51.769 ' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:51.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.769 --rc genhtml_branch_coverage=1 00:23:51.769 --rc genhtml_function_coverage=1 00:23:51.769 --rc genhtml_legend=1 00:23:51.769 --rc geninfo_all_blocks=1 00:23:51.769 --rc geninfo_unexecuted_blocks=1 00:23:51.769 00:23:51.769 ' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # 
uname -s 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 
00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.769 19:22:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:51.769 19:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.304 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:54.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.305 
19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:54.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:54.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:54.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:54.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:23:54.305 00:23:54.305 --- 10.0.0.2 ping statistics --- 00:23:54.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.305 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:23:54.305 00:23:54.305 --- 10.0.0.1 ping statistics --- 00:23:54.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.305 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1181884 00:23:54.305 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1181884 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1181884 ']' 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:54.306 [2024-12-06 19:22:04.615374] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:23:54.306 [2024-12-06 19:22:04.615459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.306 [2024-12-06 19:22:04.685501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:54.306 [2024-12-06 19:22:04.740406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.306 [2024-12-06 19:22:04.740462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.306 [2024-12-06 19:22:04.740490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.306 [2024-12-06 19:22:04.740501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.306 [2024-12-06 19:22:04.740510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
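The trace above (nvmf/common.sh lines @250-@291) shows `nvmf_tcp_init` building the loopback-free TCP test bed: one port of the NIC pair (`cvl_0_0`) is moved into a private network namespace to act as the target, the other (`cvl_0_1`) stays in the default namespace as the initiator, addresses 10.0.0.2/10.0.0.1 are assigned, TCP port 4420 is opened in iptables, and reachability is verified with ping before `nvmf_tgt` starts inside the namespace. A minimal sketch of that sequence follows; device names and addresses are taken from this log, while the `run` wrapper and the `DRY_RUN` switch are illustrative additions (commands are only echoed by default, since the real sequence needs root and the physical NICs):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init setup sequence seen in this log.
# By default (DRY_RUN=1) commands are echoed, not executed; set DRY_RUN=0
# on a machine with the NICs and root privileges to run them for real.
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}
TARGET_IF=cvl_0_0          # port moved into the namespace (target side)
INITIATOR_IF=cvl_0_1       # port left in the default namespace (initiator side)
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run() {
    if [[ "$DRY_RUN" == 1 ]]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# Flush stale addresses, create the namespace, move the target port into it
run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"

# Address both ends of the link and bring the interfaces up
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in on the initiator side, then verify both directions
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

With the namespace in place, the target is launched as `ip netns exec cvl_0_0_ns_spdk nvmf_tgt …` (visible in the trace as `NVMF_TARGET_NS_CMD`), so all target listeners bind inside the namespace while the initiator-side tools connect across the veth-like NIC pair from the default namespace.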
00:23:54.306 [2024-12-06 19:22:04.741950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.306 [2024-12-06 19:22:04.742072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.306 [2024-12-06 19:22:04.742140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.306 [2024-12-06 19:22:04.742144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.306 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:54.563 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.563 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:54.563 19:22:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:57.834 19:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:57.834 19:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:57.834 19:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:23:57.834 19:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:58.396 19:22:08 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:58.396 19:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:23:58.396 19:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:58.396 19:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:58.396 19:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:58.396 [2024-12-06 19:22:08.913547] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.396 19:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:58.960 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:58.960 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:59.218 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:59.218 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:59.481 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.739 [2024-12-06 19:22:10.069786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.739 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:59.997 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:23:59.997 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:23:59.997 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:59.997 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:24:01.393 Initializing NVMe Controllers 00:24:01.393 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:24:01.393 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:24:01.393 Initialization complete. Launching workers. 00:24:01.393 ======================================================== 00:24:01.393 Latency(us) 00:24:01.393 Device Information : IOPS MiB/s Average min max 00:24:01.393 PCIE (0000:88:00.0) NSID 1 from core 0: 82692.42 323.02 386.28 43.23 5288.12 00:24:01.393 ======================================================== 00:24:01.393 Total : 82692.42 323.02 386.28 43.23 5288.12 00:24:01.393 00:24:01.393 19:22:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:02.766 Initializing NVMe Controllers 00:24:02.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:02.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:02.766 Initialization complete. Launching workers. 
00:24:02.766 ======================================================== 00:24:02.766 Latency(us) 00:24:02.766 Device Information : IOPS MiB/s Average min max 00:24:02.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 120.69 0.47 8286.70 136.99 45821.61 00:24:02.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.89 0.16 24058.73 6981.18 47902.37 00:24:02.766 ======================================================== 00:24:02.766 Total : 162.58 0.64 12350.66 136.99 47902.37 00:24:02.766 00:24:02.766 19:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:04.138 Initializing NVMe Controllers 00:24:04.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:04.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:04.138 Initialization complete. Launching workers. 
00:24:04.138 ======================================================== 00:24:04.138 Latency(us) 00:24:04.138 Device Information : IOPS MiB/s Average min max 00:24:04.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8322.97 32.51 3862.58 679.84 10309.28 00:24:04.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3837.98 14.99 8376.20 6767.16 18734.39 00:24:04.138 ======================================================== 00:24:04.138 Total : 12160.95 47.50 5287.07 679.84 18734.39 00:24:04.138 00:24:04.138 19:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:04.138 19:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:04.138 19:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:06.668 Initializing NVMe Controllers 00:24:06.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.668 Controller IO queue size 128, less than required. 00:24:06.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.668 Controller IO queue size 128, less than required. 00:24:06.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:06.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:06.668 Initialization complete. Launching workers. 
00:24:06.668 ======================================================== 00:24:06.668 Latency(us) 00:24:06.668 Device Information : IOPS MiB/s Average min max 00:24:06.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1687.97 421.99 76700.74 53206.74 134855.23 00:24:06.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.80 146.70 229680.54 86376.05 327461.57 00:24:06.668 ======================================================== 00:24:06.668 Total : 2274.77 568.69 116163.15 53206.74 327461.57 00:24:06.668 00:24:06.668 19:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:06.927 No valid NVMe controllers or AIO or URING devices found 00:24:06.927 Initializing NVMe Controllers 00:24:06.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.927 Controller IO queue size 128, less than required. 00:24:06.927 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.927 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:06.927 Controller IO queue size 128, less than required. 00:24:06.927 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.927 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:06.927 WARNING: Some requested NVMe devices were skipped 00:24:06.927 19:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:09.456 Initializing NVMe Controllers 00:24:09.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.456 Controller IO queue size 128, less than required. 00:24:09.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.456 Controller IO queue size 128, less than required. 00:24:09.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:09.456 Initialization complete. Launching workers. 
00:24:09.456 00:24:09.456 ==================== 00:24:09.456 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:09.456 TCP transport: 00:24:09.456 polls: 9239 00:24:09.456 idle_polls: 6132 00:24:09.456 sock_completions: 3107 00:24:09.456 nvme_completions: 5989 00:24:09.456 submitted_requests: 9032 00:24:09.456 queued_requests: 1 00:24:09.456 00:24:09.456 ==================== 00:24:09.456 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:09.456 TCP transport: 00:24:09.456 polls: 12700 00:24:09.456 idle_polls: 9390 00:24:09.456 sock_completions: 3310 00:24:09.456 nvme_completions: 6021 00:24:09.456 submitted_requests: 8990 00:24:09.456 queued_requests: 1 00:24:09.456 ======================================================== 00:24:09.456 Latency(us) 00:24:09.456 Device Information : IOPS MiB/s Average min max 00:24:09.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1496.85 374.21 86625.40 46264.07 138441.83 00:24:09.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1504.85 376.21 86378.05 44470.08 130596.22 00:24:09.456 ======================================================== 00:24:09.456 Total : 3001.70 750.42 86501.39 44470.08 138441.83 00:24:09.456 00:24:09.456 19:22:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:09.456 19:22:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@121 -- # sync 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.715 rmmod nvme_tcp 00:24:09.715 rmmod nvme_fabrics 00:24:09.715 rmmod nvme_keyring 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1181884 ']' 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1181884 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1181884 ']' 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1181884 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.715 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1181884 00:24:09.973 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.973 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.973 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1181884' 00:24:09.973 killing process with pid 1181884 00:24:09.973 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 
-- # kill 1181884 00:24:09.973 19:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1181884 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.873 19:22:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.795 19:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.795 00:24:13.795 real 0m21.891s 00:24:13.795 user 1m7.570s 00:24:13.795 sys 0m5.696s 00:24:13.795 19:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.795 19:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:13.795 ************************************ 00:24:13.795 END TEST nvmf_perf 00:24:13.795 ************************************ 00:24:13.795 19:22:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:13.795 19:22:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:13.795 19:22:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.795 19:22:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.795 ************************************ 00:24:13.795 START TEST nvmf_fio_host 00:24:13.795 ************************************ 00:24:13.795 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:13.795 * Looking for test storage... 00:24:13.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.795 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:13.795 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.796 19:22:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.796 19:22:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:13.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.796 --rc genhtml_branch_coverage=1 00:24:13.796 --rc genhtml_function_coverage=1 00:24:13.796 --rc genhtml_legend=1 00:24:13.796 --rc geninfo_all_blocks=1 00:24:13.796 --rc geninfo_unexecuted_blocks=1 00:24:13.796 00:24:13.796 ' 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:13.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.796 --rc genhtml_branch_coverage=1 00:24:13.796 --rc genhtml_function_coverage=1 00:24:13.796 --rc genhtml_legend=1 00:24:13.796 --rc geninfo_all_blocks=1 00:24:13.796 --rc geninfo_unexecuted_blocks=1 00:24:13.796 00:24:13.796 ' 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:13.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.796 --rc genhtml_branch_coverage=1 00:24:13.796 --rc genhtml_function_coverage=1 00:24:13.796 --rc genhtml_legend=1 00:24:13.796 --rc geninfo_all_blocks=1 00:24:13.796 --rc geninfo_unexecuted_blocks=1 00:24:13.796 00:24:13.796 ' 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:13.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.796 --rc genhtml_branch_coverage=1 00:24:13.796 --rc genhtml_function_coverage=1 00:24:13.796 --rc genhtml_legend=1 00:24:13.796 --rc geninfo_all_blocks=1 00:24:13.796 --rc geninfo_unexecuted_blocks=1 00:24:13.796 00:24:13.796 ' 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.796 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:13.797 19:22:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:13.797 19:22:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:24:16.323 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:16.323 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.323 19:22:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:16.323 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.323 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:16.324 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.324 19:22:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:24:16.324 00:24:16.324 --- 10.0.0.2 ping statistics --- 00:24:16.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.324 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:24:16.324 00:24:16.324 --- 10.0.0.1 ping statistics --- 00:24:16.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.324 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1185980 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1185980 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1185980 ']' 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 [2024-12-06 19:22:26.625374] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:24:16.324 [2024-12-06 19:22:26.625463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.324 [2024-12-06 19:22:26.697101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.324 [2024-12-06 19:22:26.754788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.324 [2024-12-06 19:22:26.754842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:16.324 [2024-12-06 19:22:26.754869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.324 [2024-12-06 19:22:26.754880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.324 [2024-12-06 19:22:26.754890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.324 [2024-12-06 19:22:26.756490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.324 [2024-12-06 19:22:26.756546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.324 [2024-12-06 19:22:26.756592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.324 [2024-12-06 19:22:26.756595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:16.324 19:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:16.582 [2024-12-06 19:22:27.133879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.582 19:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:16.840 19:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.840 19:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.840 19:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:17.098 Malloc1 00:24:17.098 19:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.355 19:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:17.613 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.871 [2024-12-06 19:22:28.353138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.871 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:18.129 19:22:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:18.129 19:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.388 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:18.388 fio-3.35 00:24:18.388 Starting 1 thread 00:24:20.939 [2024-12-06 19:22:31.222740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 
00:24:20.939 [2024-12-06 19:22:31.222916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.222988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.223000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.223011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 [2024-12-06 19:22:31.223023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485700 is same with the state(6) to be set 00:24:20.939 00:24:20.939 test: (groupid=0, jobs=1): err= 0: pid=1186342: Fri Dec 6 19:22:31 2024 00:24:20.939 read: IOPS=8786, BW=34.3MiB/s (36.0MB/s)(68.9MiB/2007msec) 00:24:20.939 slat (nsec): min=1911, max=163600, avg=2501.97, stdev=1943.65 00:24:20.939 clat (usec): min=2614, max=13560, avg=7953.04, stdev=707.34 00:24:20.939 lat (usec): min=2646, 
max=13563, avg=7955.54, stdev=707.23 00:24:20.939 clat percentiles (usec): 00:24:20.939 | 1.00th=[ 6390], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:24:20.939 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8094], 00:24:20.939 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:24:20.939 | 99.00th=[ 9634], 99.50th=[10028], 99.90th=[12256], 99.95th=[12780], 00:24:20.939 | 99.99th=[13435] 00:24:20.939 bw ( KiB/s): min=34104, max=35664, per=99.97%, avg=35136.00, stdev=700.20, samples=4 00:24:20.939 iops : min= 8526, max= 8916, avg=8784.00, stdev=175.05, samples=4 00:24:20.939 write: IOPS=8792, BW=34.3MiB/s (36.0MB/s)(68.9MiB/2007msec); 0 zone resets 00:24:20.939 slat (usec): min=2, max=133, avg= 2.63, stdev= 1.49 00:24:20.939 clat (usec): min=1455, max=13366, avg=6571.41, stdev=607.73 00:24:20.939 lat (usec): min=1465, max=13369, avg=6574.04, stdev=607.69 00:24:20.939 clat percentiles (usec): 00:24:20.939 | 1.00th=[ 5276], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:24:20.939 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:24:20.939 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7373], 00:24:20.939 | 99.00th=[ 7963], 99.50th=[ 8979], 99.90th=[11338], 99.95th=[11731], 00:24:20.939 | 99.99th=[13304] 00:24:20.939 bw ( KiB/s): min=34528, max=35600, per=100.00%, avg=35172.00, stdev=512.23, samples=4 00:24:20.939 iops : min= 8632, max= 8900, avg=8793.00, stdev=128.06, samples=4 00:24:20.939 lat (msec) : 2=0.02%, 4=0.09%, 10=99.47%, 20=0.42% 00:24:20.939 cpu : usr=66.20%, sys=32.15%, ctx=80, majf=0, minf=35 00:24:20.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:20.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:20.939 issued rwts: total=17635,17646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.939 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:24:20.939 00:24:20.939 Run status group 0 (all jobs): 00:24:20.939 READ: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=68.9MiB (72.2MB), run=2007-2007msec 00:24:20.939 WRITE: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=68.9MiB (72.3MB), run=2007-2007msec 00:24:20.939 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:20.939 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:20.939 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:20.939 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:20.939 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:20.940 19:22:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:20.940 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:20.940 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:20.940 fio-3.35 00:24:20.940 Starting 1 thread 00:24:23.468 00:24:23.468 test: (groupid=0, jobs=1): err= 0: pid=1186677: Fri Dec 6 19:22:33 2024 00:24:23.468 read: IOPS=8246, BW=129MiB/s (135MB/s)(259MiB/2007msec) 00:24:23.468 slat (usec): min=2, max=131, avg= 3.62, stdev= 1.82 
00:24:23.468 clat (usec): min=2332, max=17166, avg=8856.09, stdev=2140.89 00:24:23.468 lat (usec): min=2336, max=17170, avg=8859.70, stdev=2140.94 00:24:23.468 clat percentiles (usec): 00:24:23.468 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 6194], 20.00th=[ 6980], 00:24:23.468 | 30.00th=[ 7570], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9372], 00:24:23.468 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11469], 95.00th=[12518], 00:24:23.468 | 99.00th=[14353], 99.50th=[15401], 99.90th=[16909], 99.95th=[16909], 00:24:23.468 | 99.99th=[17171] 00:24:23.468 bw ( KiB/s): min=63072, max=77856, per=52.06%, avg=68696.00, stdev=7050.00, samples=4 00:24:23.468 iops : min= 3942, max= 4866, avg=4293.50, stdev=440.62, samples=4 00:24:23.468 write: IOPS=4837, BW=75.6MiB/s (79.2MB/s)(141MiB/1859msec); 0 zone resets 00:24:23.468 slat (usec): min=30, max=133, avg=33.10, stdev= 4.66 00:24:23.468 clat (usec): min=6574, max=21875, avg=11599.26, stdev=1984.00 00:24:23.468 lat (usec): min=6606, max=21906, avg=11632.36, stdev=1983.96 00:24:23.468 clat percentiles (usec): 00:24:23.468 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:24:23.468 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11469], 60.00th=[11994], 00:24:23.468 | 70.00th=[12518], 80.00th=[13435], 90.00th=[14353], 95.00th=[15008], 00:24:23.468 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18220], 99.95th=[18482], 00:24:23.468 | 99.99th=[21890] 00:24:23.468 bw ( KiB/s): min=65760, max=80800, per=92.58%, avg=71648.00, stdev=6973.90, samples=4 00:24:23.468 iops : min= 4110, max= 5050, avg=4478.00, stdev=435.87, samples=4 00:24:23.468 lat (msec) : 4=0.27%, 10=53.76%, 20=45.97%, 50=0.01% 00:24:23.468 cpu : usr=78.96%, sys=19.84%, ctx=47, majf=0, minf=57 00:24:23.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:23.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:24:23.468 issued rwts: total=16551,8992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:23.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:23.468 00:24:23.468 Run status group 0 (all jobs): 00:24:23.468 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2007-2007msec 00:24:23.468 WRITE: bw=75.6MiB/s (79.2MB/s), 75.6MiB/s-75.6MiB/s (79.2MB/s-79.2MB/s), io=141MiB (147MB), run=1859-1859msec 00:24:23.468 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.727 rmmod nvme_tcp 00:24:23.727 rmmod nvme_fabrics 00:24:23.727 rmmod nvme_keyring 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 
00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1185980 ']' 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1185980 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1185980 ']' 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1185980 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1185980 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1185980' 00:24:23.727 killing process with pid 1185980 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1185980 00:24:23.727 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1185980 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.986 19:22:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.986 19:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.521 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.521 00:24:26.521 real 0m12.463s 00:24:26.521 user 0m36.683s 00:24:26.521 sys 0m4.066s 00:24:26.521 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.521 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.522 ************************************ 00:24:26.522 END TEST nvmf_fio_host 00:24:26.522 ************************************ 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.522 ************************************ 00:24:26.522 START TEST nvmf_failover 00:24:26.522 ************************************ 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:24:26.522 * Looking for test storage... 00:24:26.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.522 19:22:36 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:26.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.522 --rc genhtml_branch_coverage=1 00:24:26.522 --rc genhtml_function_coverage=1 00:24:26.522 --rc genhtml_legend=1 00:24:26.522 --rc geninfo_all_blocks=1 00:24:26.522 --rc geninfo_unexecuted_blocks=1 00:24:26.522 00:24:26.522 ' 00:24:26.522 19:22:36 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:26.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.522 --rc genhtml_branch_coverage=1 00:24:26.522 --rc genhtml_function_coverage=1 00:24:26.522 --rc genhtml_legend=1 00:24:26.522 --rc geninfo_all_blocks=1 00:24:26.522 --rc geninfo_unexecuted_blocks=1 00:24:26.522 00:24:26.522 ' 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:26.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.522 --rc genhtml_branch_coverage=1 00:24:26.522 --rc genhtml_function_coverage=1 00:24:26.522 --rc genhtml_legend=1 00:24:26.522 --rc geninfo_all_blocks=1 00:24:26.522 --rc geninfo_unexecuted_blocks=1 00:24:26.522 00:24:26.522 ' 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:26.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.522 --rc genhtml_branch_coverage=1 00:24:26.522 --rc genhtml_function_coverage=1 00:24:26.522 --rc genhtml_legend=1 00:24:26.522 --rc geninfo_all_blocks=1 00:24:26.522 --rc geninfo_unexecuted_blocks=1 00:24:26.522 00:24:26.522 ' 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.522 
19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.522 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.523 19:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.428 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.429 19:22:38 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:28.429 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.429 19:22:38 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:28.429 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.429 19:22:38 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:28.429 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:28.429 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.429 19:22:38 
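The `pci_net_devs` entries above show how `nvmf/common.sh` maps a PCI function to its kernel interface name via sysfs. A minimal reproduction of that lookup is below; the `list_net_devs` helper name and the sysfs-root parameter are ours (not SPDK's), added so the logic can be exercised against any directory tree. On a live host you would pass `/sys/bus/pci/devices` and a real BDF.

```shell
#!/usr/bin/env bash
# Print the net interfaces a PCI function exposes, mirroring the
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) logic in the trace.
list_net_devs() {
  local root=$1 pci=$2
  local devs=("$root/$pci/net/"*)     # one glob entry per interface
  devs=("${devs[@]##*/}")             # strip the path, keep only the names
  echo "Found net devices under $pci: ${devs[*]}"
}

# On a live host: list_net_devs /sys/bus/pci/devices 0000:0a:00.0
```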
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.429 19:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.429 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.687 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:24:28.688 00:24:28.688 --- 10.0.0.2 ping statistics --- 00:24:28.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.688 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:24:28.688 00:24:28.688 --- 10.0.0.1 ping statistics --- 00:24:28.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.688 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1188993 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover 
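The `nvmf_tcp_init` steps above wire the two ports of one NIC into a loopback-free topology: the target port is moved into a private network namespace, addressed on the same subnet as the initiator port, and `nvmf_tgt` is then launched inside that namespace. The sketch below condenses those steps; `run()` only echoes each command (it does not execute it), so the sketch needs no root privileges. Interface names and addresses are copied from the log.

```shell
#!/usr/bin/env bash
# Dry-run of the netns setup used for the TCP tests. run() echoes the
# command instead of executing it, so no privileges are required.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                         # namespace holding the target port
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"        # target port leaves the root ns
run ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                     # initiator -> target sanity check
```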
-- nvmf/common.sh@510 -- # waitforlisten 1188993 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1188993 ']' 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.688 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.688 [2024-12-06 19:22:39.098796] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:24:28.688 [2024-12-06 19:22:39.098885] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.688 [2024-12-06 19:22:39.168070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:28.688 [2024-12-06 19:22:39.222250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.688 [2024-12-06 19:22:39.222299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.688 [2024-12-06 19:22:39.222327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.688 [2024-12-06 19:22:39.222338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:28.688 [2024-12-06 19:22:39.222347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.688 [2024-12-06 19:22:39.223745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.688 [2024-12-06 19:22:39.223808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.688 [2024-12-06 19:22:39.223811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.961 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.961 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:28.961 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.961 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.961 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.961 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.961 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:29.281 [2024-12-06 19:22:39.632332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.281 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:29.545 Malloc0 00:24:29.545 19:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.803 19:22:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:30.061 19:22:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.318 [2024-12-06 19:22:40.866795] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.318 19:22:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:30.882 [2024-12-06 19:22:41.187802] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:30.882 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:31.139 [2024-12-06 19:22:41.516825] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1189292 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1189292 /var/tmp/bdevperf.sock 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1189292 ']' 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.139 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:31.397 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.397 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:31.397 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:31.963 NVMe0n1 00:24:31.963 19:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:32.221 00:24:32.221 19:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1189427 00:24:32.221 19:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:32.221 19:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
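The RPC calls scattered through the trace above amount to a short build-then-failover sequence: create the TCP transport, back a subsystem with a malloc bdev, expose three listeners, attach two paths from bdevperf with `-x failover`, then tear down the active listener. The sketch below condenses that sequence; `rpc()` here is a local stand-in that only echoes the call (it is not SPDK's `rpc.py`), so the sketch runs without a live target. The NQN, ports, addresses, and socket path are taken from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the failover test's RPC sequence. rpc() echoes the
# command instead of invoking SPDK's rpc.py, so no target is required.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

# Build the target: TCP transport, a RAM-backed namespace, three listeners.
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do                    # three paths for failover
  rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done

# Host side: attach two paths with -x failover, then kill the active one.
RPC_SOCK=/var/tmp/bdevperf.sock
for port in 4420 4421; do
  rpc -s "$RPC_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN" -x failover
done
rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
# (the real test now sleeps 3 s while bdevperf I/O fails over to 4421)
```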
00:24:33.153 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.412 [2024-12-06 19:22:43.967788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d12e00 is same with the state(6) to be set 00:24:33.412 [message repeated for each state transition through 19:22:43.968552] 00:24:33.670 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:36.950 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:37.208 00:24:37.208 19:22:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:37.467 [2024-12-06 19:22:47.824723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.467 [message repeated for each state transition through 19:22:47.825596] 00:24:37.467 [2024-12-06 19:22:47.825607] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.468 [2024-12-06 19:22:47.825619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.468 [2024-12-06 19:22:47.825630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.468 [2024-12-06 19:22:47.825641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.468 [2024-12-06 19:22:47.825652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.468 [2024-12-06 19:22:47.825670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.468 [2024-12-06 19:22:47.825700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.468 [2024-12-06 19:22:47.825712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.468 [2024-12-06 19:22:47.825724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d138b0 is same with the state(6) to be set 00:24:37.468 19:22:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:40.750 19:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.750 [2024-12-06 19:22:51.112723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.750 19:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:41.700 
19:22:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:41.958 19:22:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1189427 00:24:48.523 { 00:24:48.523 "results": [ 00:24:48.523 { 00:24:48.523 "job": "NVMe0n1", 00:24:48.523 "core_mask": "0x1", 00:24:48.523 "workload": "verify", 00:24:48.523 "status": "finished", 00:24:48.523 "verify_range": { 00:24:48.523 "start": 0, 00:24:48.523 "length": 16384 00:24:48.523 }, 00:24:48.523 "queue_depth": 128, 00:24:48.523 "io_size": 4096, 00:24:48.523 "runtime": 15.008787, 00:24:48.523 "iops": 8421.33344953193, 00:24:48.523 "mibps": 32.8958337872341, 00:24:48.523 "io_failed": 9773, 00:24:48.523 "io_timeout": 0, 00:24:48.523 "avg_latency_us": 14080.504367159172, 00:24:48.523 "min_latency_us": 552.2014814814814, 00:24:48.523 "max_latency_us": 16796.634074074074 00:24:48.523 } 00:24:48.523 ], 00:24:48.523 "core_count": 1 00:24:48.523 } 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1189292 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1189292 ']' 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1189292 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1189292 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.523 19:22:57 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1189292' 00:24:48.523 killing process with pid 1189292 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1189292 00:24:48.523 19:22:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1189292 00:24:48.523 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:48.523 [2024-12-06 19:22:41.583420] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:24:48.523 [2024-12-06 19:22:41.583507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189292 ] 00:24:48.523 [2024-12-06 19:22:41.649869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.523 [2024-12-06 19:22:41.708189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.523 Running I/O for 15 seconds... 
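The bdevperf JSON summary above reports both `iops` and `mibps`; the two figures are consistent with each other given the 4096-byte `io_size`. A minimal sketch checking that arithmetic (values copied from the result block shown; nothing here is part of the test run itself):

```python
# Sanity-check the bdevperf summary: MiB/s should equal IOPS * io_size / 2**20.
# Values are copied verbatim from the "results" JSON block in this log.
iops = 8421.33344953193
io_size = 4096                     # bytes per I/O, from the "io_size" field
mibps = iops * io_size / (1 << 20)
print(round(mibps, 4))             # close to the reported "mibps" of 32.8958...
```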
00:24:48.523 8482.00 IOPS, 33.13 MiB/s [2024-12-06T18:22:59.100Z] [2024-12-06 19:22:43.969807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.523 [2024-12-06 19:22:43.969850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.523 [2024-12-06 19:22:43.969889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.523 [2024-12-06 19:22:43.969905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.523 [2024-12-06 19:22:43.969922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.523 [2024-12-06 19:22:43.969936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.523 [2024-12-06 19:22:43.969952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.523 [2024-12-06 19:22:43.969971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.523 [2024-12-06 19:22:43.969987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.523 [2024-12-06 19:22:43.970001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.523 [2024-12-06 19:22:43.970016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:48.523 [2024-12-06 19:22:43.970030] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 
[2024-12-06 19:22:43.972873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.526 [2024-12-06 19:22:43.972903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.526 [2024-12-06 19:22:43.972924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.527 [2024-12-06 19:22:43.972938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.972954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.527 [2024-12-06 19:22:43.972973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82912 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82920 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82928 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82952 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82960 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82968 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82976 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82984 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82992 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83000 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83008 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83016 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83024 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 
[2024-12-06 19:22:43.973749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83032 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.527 [2024-12-06 19:22:43.973785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.527 [2024-12-06 19:22:43.973796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.527 [2024-12-06 19:22:43.973806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83040 len:8 PRP1 0x0 PRP2 0x0 00:24:48.527 [2024-12-06 19:22:43.973819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.973831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.973842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.973852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83048 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.973865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.973877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.973887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.973898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:83056 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.973909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.973922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.973932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.973942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83064 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.973954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.973967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.973977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.973988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83072 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.974000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.974012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.974023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.974033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83080 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.974045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.974057] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.974068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.974078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83088 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.974090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.974107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.974118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.974128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83096 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.974140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.974153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.974169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.974180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83104 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.974192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.974205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.974216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 
19:22:43.974227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83112 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.974239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.974252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.974262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.974273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83120 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.974286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.974299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.528 [2024-12-06 19:22:43.974310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.528 [2024-12-06 19:22:43.974321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83128 len:8 PRP1 0x0 PRP2 0x0 00:24:48.528 [2024-12-06 19:22:43.974334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.528 [2024-12-06 19:22:43.974400] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:48.528 [2024-12-06 19:22:43.974439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.528 [2024-12-06 19:22:43.974458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:48.528 [2024-12-06 19:22:43.974474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.528 [2024-12-06 19:22:43.974486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.528 [2024-12-06 19:22:43.974500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.528 [2024-12-06 19:22:43.974514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.528 [2024-12-06 19:22:43.974527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.528 [2024-12-06 19:22:43.974540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.528 [2024-12-06 19:22:43.974565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:48.528 [2024-12-06 19:22:43.974616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2087180 (9): Bad file descriptor
00:24:48.528 [2024-12-06 19:22:43.977932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:48.528 [2024-12-06 19:22:44.007856] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:24:48.528 8310.00 IOPS, 32.46 MiB/s [2024-12-06T18:22:59.105Z] 8377.00 IOPS, 32.72 MiB/s [2024-12-06T18:22:59.105Z] 8442.00 IOPS, 32.98 MiB/s [2024-12-06T18:22:59.105Z] 8464.00 IOPS, 33.06 MiB/s [2024-12-06T18:22:59.105Z]
[2024-12-06 19:22:47.827822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.528 [2024-12-06 19:22:47.827866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 14 further identical READ / "ABORTED - SQ DELETION (00/08)" pairs (lba:91248 through lba:91352, len:8, SGL TRANSPORT DATA BLOCK) elided ...]
00:24:48.529 [2024-12-06 19:22:47.828316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:48.529 [2024-12-06 19:22:47.828329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 6 further identical WRITE / "ABORTED - SQ DELETION (00/08)" pairs (lba:91384 through lba:91424, len:8, SGL DATA BLOCK) elided ...]
00:24:48.529 [2024-12-06 19:22:47.828527] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.529 [2024-12-06 19:22:47.828541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.529 [2024-12-06 19:22:47.828559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.529 [2024-12-06 19:22:47.828574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.529 [2024-12-06 19:22:47.828589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.529 [2024-12-06 19:22:47.828602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.529 [2024-12-06 19:22:47.828617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.529 [2024-12-06 19:22:47.828631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.529 [2024-12-06 19:22:47.828645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.529 [2024-12-06 19:22:47.828659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 
[2024-12-06 19:22:47.828873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.828979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.828993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829036] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829372] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.530 [2024-12-06 19:22:47.829514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.530 [2024-12-06 19:22:47.829529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:112 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.531 [2024-12-06 19:22:47.829702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:48.531 [2024-12-06 19:22:47.829718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.531 [2024-12-06 19:22:47.829732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.829978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.829992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 
19:22:47.830213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830369] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.531 [2024-12-06 19:22:47.830445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.531 [2024-12-06 19:22:47.830459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.532 [2024-12-06 19:22:47.830488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.532 [2024-12-06 19:22:47.830516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:48.532 [2024-12-06 19:22:47.830545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.532 [2024-12-06 19:22:47.830574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.532 [2024-12-06 19:22:47.830602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.532 [2024-12-06 19:22:47.830630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.532 [2024-12-06 19:22:47.830659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.532 [2024-12-06 19:22:47.830719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92000 len:8 PRP1 0x0 PRP2 0x0 00:24:48.532 [2024-12-06 19:22:47.830732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.532 [2024-12-06 19:22:47.830763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.532 [2024-12-06 19:22:47.830775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92008 len:8 PRP1 0x0 PRP2 0x0 00:24:48.532 [2024-12-06 19:22:47.830787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.532 [2024-12-06 19:22:47.830811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.532 [2024-12-06 19:22:47.830826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92016 len:8 PRP1 0x0 PRP2 0x0 00:24:48.532 [2024-12-06 19:22:47.830839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.532 [2024-12-06 19:22:47.830862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.532 [2024-12-06 19:22:47.830873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92024 len:8 PRP1 0x0 PRP2 0x0 00:24:48.532 [2024-12-06 19:22:47.830886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532 [2024-12-06 19:22:47.830899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.532 [2024-12-06 19:22:47.830909] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.532
[2024-12-06 19:22:47.830920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92032 len:8 PRP1 0x0 PRP2 0x0 00:24:48.532
[2024-12-06 19:22:47.830932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.532
[2024-12-06 19:22:47.830945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.532
[... the same three-record pattern (aborting queued i/o / Command completed manually / WRITE print_command followed by ABORTED - SQ DELETION (00/08)) repeats for each 8-block WRITE from lba:92040 through lba:92256, timestamps 19:22:47.830956–47.832274 ...]
[2024-12-06 19:22:47.832339] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:48.534
[2024-12-06 19:22:47.832383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.534
[2024-12-06 19:22:47.832401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.534
[... the admin ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:2, cid:1 and cid:0, timestamps 19:22:47.832415–47.832482 ...]
[2024-12-06 19:22:47.832495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:48.534
[2024-12-06 19:22:47.832549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2087180 (9): Bad file descriptor 00:24:48.534
[2024-12-06 19:22:47.835799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:48.534
[2024-12-06 19:22:47.858084] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:24:48.534
8430.50 IOPS, 32.93 MiB/s [2024-12-06T18:22:59.111Z]
8446.71 IOPS, 32.99 MiB/s [2024-12-06T18:22:59.111Z]
8450.50 IOPS, 33.01 MiB/s [2024-12-06T18:22:59.111Z]
8445.44 IOPS, 32.99 MiB/s [2024-12-06T18:22:59.111Z]
[2024-12-06 19:22:52.448610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.534
[2024-12-06 19:22:52.448696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.534
[... the print_command / ABORTED - SQ DELETION pair repeats for queued WRITEs (SGL DATA BLOCK OFFSET, lba:18240 through lba:18624) interleaved with queued READs (SGL TRANSPORT DATA BLOCK, lba:18112 through lba:18176) on the same qpair, timestamps 19:22:52.448727–52.450482 ...]
[2024-12-06 19:22:52.450497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:41 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:48.536 [2024-12-06 19:22:52.450677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.450974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.450987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.451002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.451016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.451031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.536 [2024-12-06 19:22:52.451045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.536 [2024-12-06 19:22:52.451060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 
19:22:52.451175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451348] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.537 [2024-12-06 19:22:52.451894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.537 [2024-12-06 19:22:52.451942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:8 PRP1 0x0 PRP2 0x0 00:24:48.537 [2024-12-06 19:22:52.451956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.451974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.537 [2024-12-06 19:22:52.451990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.537 [2024-12-06 19:22:52.452003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19016 len:8 PRP1 0x0 PRP2 0x0 00:24:48.537 [2024-12-06 19:22:52.452015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.537 [2024-12-06 19:22:52.452028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19024 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19032 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19048 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19056 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19064 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19080 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452401] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19088 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19096 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:24:48.538 [2024-12-06 19:22:52.452563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19112 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19120 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18184 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:18192 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18200 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18216 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452890] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.538 [2024-12-06 19:22:52.452911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18224 len:8 PRP1 0x0 PRP2 0x0 00:24:48.538 [2024-12-06 19:22:52.452923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.538 [2024-12-06 19:22:52.452936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.538 [2024-12-06 19:22:52.452947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.539 [2024-12-06 19:22:52.452957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18232 len:8 PRP1 0x0 PRP2 0x0 00:24:48.539 [2024-12-06 19:22:52.452970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.539 [2024-12-06 19:22:52.453039] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:48.539 [2024-12-06 19:22:52.453077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.539 [2024-12-06 19:22:52.453095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.539 [2024-12-06 19:22:52.453110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.539 [2024-12-06 19:22:52.453123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:48.539 [2024-12-06 19:22:52.453137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.539 [2024-12-06 19:22:52.453149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.539 [2024-12-06 19:22:52.453163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.539 [2024-12-06 19:22:52.453181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.539 [2024-12-06 19:22:52.453195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:48.539 [2024-12-06 19:22:52.456492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:48.539 [2024-12-06 19:22:52.456535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2087180 (9): Bad file descriptor 00:24:48.539 [2024-12-06 19:22:52.640366] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:24:48.539 8298.20 IOPS, 32.41 MiB/s [2024-12-06T18:22:59.116Z] 8337.09 IOPS, 32.57 MiB/s [2024-12-06T18:22:59.116Z] 8364.67 IOPS, 32.67 MiB/s [2024-12-06T18:22:59.116Z] 8385.85 IOPS, 32.76 MiB/s [2024-12-06T18:22:59.116Z] 8404.07 IOPS, 32.83 MiB/s [2024-12-06T18:22:59.116Z] 8418.20 IOPS, 32.88 MiB/s 00:24:48.539 Latency(us) 00:24:48.539 [2024-12-06T18:22:59.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.539 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:48.539 Verification LBA range: start 0x0 length 0x4000 00:24:48.539 NVMe0n1 : 15.01 8421.33 32.90 651.15 0.00 14080.50 552.20 16796.63 00:24:48.539 [2024-12-06T18:22:59.116Z] =================================================================================================================== 00:24:48.539 [2024-12-06T18:22:59.116Z] Total : 8421.33 32.90 651.15 0.00 14080.50 552.20 16796.63 00:24:48.539 Received shutdown signal, test time was about 15.000000 seconds 00:24:48.539 00:24:48.539 Latency(us) 00:24:48.539 [2024-12-06T18:22:59.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.539 [2024-12-06T18:22:59.116Z] =================================================================================================================== 00:24:48.539 [2024-12-06T18:22:59.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1191170 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
128 -o 4096 -w verify -t 1 -f 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1191170 /var/tmp/bdevperf.sock 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1191170 ']' 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:48.539 [2024-12-06 19:22:58.638247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:48.539 19:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:48.539 [2024-12-06 19:22:58.907049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:48.539 19:22:58 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:48.797 NVMe0n1 00:24:48.797 19:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:49.373 00:24:49.373 19:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:49.630 00:24:49.630 19:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:49.630 19:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:49.887 19:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.144 19:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:53.420 19:23:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:53.420 19:23:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:53.420 19:23:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1191938 00:24:53.420 19:23:03 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:53.420 19:23:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1191938 00:24:54.793 { 00:24:54.793 "results": [ 00:24:54.793 { 00:24:54.793 "job": "NVMe0n1", 00:24:54.793 "core_mask": "0x1", 00:24:54.793 "workload": "verify", 00:24:54.793 "status": "finished", 00:24:54.793 "verify_range": { 00:24:54.793 "start": 0, 00:24:54.793 "length": 16384 00:24:54.793 }, 00:24:54.793 "queue_depth": 128, 00:24:54.793 "io_size": 4096, 00:24:54.793 "runtime": 1.020621, 00:24:54.793 "iops": 8476.21203169443, 00:24:54.793 "mibps": 33.110203248806364, 00:24:54.793 "io_failed": 0, 00:24:54.793 "io_timeout": 0, 00:24:54.793 "avg_latency_us": 15040.094924414647, 00:24:54.793 "min_latency_us": 3373.8903703703704, 00:24:54.793 "max_latency_us": 13107.2 00:24:54.793 } 00:24:54.793 ], 00:24:54.793 "core_count": 1 00:24:54.793 } 00:24:54.793 19:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:54.793 [2024-12-06 19:22:58.150407] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:24:54.793 [2024-12-06 19:22:58.150493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191170 ] 00:24:54.793 [2024-12-06 19:22:58.223062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.793 [2024-12-06 19:22:58.279281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.793 [2024-12-06 19:23:00.590125] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:54.793 [2024-12-06 19:23:00.590223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.793 [2024-12-06 19:23:00.590262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.793 [2024-12-06 19:23:00.590282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.793 [2024-12-06 19:23:00.590296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.793 [2024-12-06 19:23:00.590310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.793 [2024-12-06 19:23:00.590324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.793 [2024-12-06 19:23:00.590338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.793 [2024-12-06 19:23:00.590351] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.793 [2024-12-06 19:23:00.590365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:54.793 [2024-12-06 19:23:00.590420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:54.793 [2024-12-06 19:23:00.590454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec6180 (9): Bad file descriptor 00:24:54.793 [2024-12-06 19:23:00.594192] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:54.793 Running I/O for 1 seconds... 00:24:54.793 8396.00 IOPS, 32.80 MiB/s 00:24:54.793 Latency(us) 00:24:54.793 [2024-12-06T18:23:05.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.793 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.793 Verification LBA range: start 0x0 length 0x4000 00:24:54.793 NVMe0n1 : 1.02 8476.21 33.11 0.00 0.00 15040.09 3373.89 13107.20 00:24:54.793 [2024-12-06T18:23:05.370Z] =================================================================================================================== 00:24:54.793 [2024-12-06T18:23:05.370Z] Total : 8476.21 33.11 0.00 0.00 15040.09 3373.89 13107.20 00:24:54.793 19:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.793 19:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:54.793 19:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.050 19:23:05 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.050 19:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:55.615 19:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.615 19:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:58.894 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:58.894 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:58.894 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1191170 00:24:58.894 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1191170 ']' 00:24:58.894 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1191170 00:24:58.894 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:58.894 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.894 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1191170 00:24:59.153 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.153 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.153 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1191170' 00:24:59.153 killing 
process with pid 1191170 00:24:59.153 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1191170 00:24:59.153 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1191170 00:24:59.153 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:59.153 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.718 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:59.718 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:59.718 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:59.718 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:59.718 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:59.718 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.718 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:59.718 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.718 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.718 rmmod nvme_tcp 00:24:59.718 rmmod nvme_fabrics 00:24:59.718 rmmod nvme_keyring 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1188993 ']' 00:24:59.718 19:23:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1188993 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1188993 ']' 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1188993 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1188993 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1188993' 00:24:59.718 killing process with pid 1188993 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1188993 00:24:59.718 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1188993 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:59.977 19:23:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.977 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.880 19:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:01.880 00:25:01.880 real 0m35.850s 00:25:01.880 user 2m6.698s 00:25:01.880 sys 0m5.888s 00:25:01.880 19:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.880 19:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.880 ************************************ 00:25:01.880 END TEST nvmf_failover 00:25:01.880 ************************************ 00:25:01.880 19:23:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:01.880 19:23:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:01.880 19:23:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.880 19:23:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.139 ************************************ 00:25:02.139 START TEST nvmf_host_discovery 00:25:02.139 ************************************ 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:02.139 * Looking for test storage... 
00:25:02.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:02.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.139 --rc genhtml_branch_coverage=1 00:25:02.139 --rc genhtml_function_coverage=1 00:25:02.139 --rc 
genhtml_legend=1 00:25:02.139 --rc geninfo_all_blocks=1 00:25:02.139 --rc geninfo_unexecuted_blocks=1 00:25:02.139 00:25:02.139 ' 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:02.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.139 --rc genhtml_branch_coverage=1 00:25:02.139 --rc genhtml_function_coverage=1 00:25:02.139 --rc genhtml_legend=1 00:25:02.139 --rc geninfo_all_blocks=1 00:25:02.139 --rc geninfo_unexecuted_blocks=1 00:25:02.139 00:25:02.139 ' 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:02.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.139 --rc genhtml_branch_coverage=1 00:25:02.139 --rc genhtml_function_coverage=1 00:25:02.139 --rc genhtml_legend=1 00:25:02.139 --rc geninfo_all_blocks=1 00:25:02.139 --rc geninfo_unexecuted_blocks=1 00:25:02.139 00:25:02.139 ' 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:02.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.139 --rc genhtml_branch_coverage=1 00:25:02.139 --rc genhtml_function_coverage=1 00:25:02.139 --rc genhtml_legend=1 00:25:02.139 --rc geninfo_all_blocks=1 00:25:02.139 --rc geninfo_unexecuted_blocks=1 00:25:02.139 00:25:02.139 ' 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.139 19:23:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.139 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.140 19:23:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.140 19:23:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.140 19:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:04.045 
19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:04.045 19:23:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:04.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:04.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.045 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:04.046 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:04.046 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:04.046 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:04.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:04.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:25:04.305 00:25:04.305 --- 10.0.0.2 ping statistics --- 00:25:04.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.305 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:04.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:04.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:25:04.305 00:25:04.305 --- 10.0.0.1 ping statistics --- 00:25:04.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.305 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:04.305 
19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1194550 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1194550 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1194550 ']' 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.305 19:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.305 [2024-12-06 19:23:14.805949] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:25:04.305 [2024-12-06 19:23:14.806040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.305 [2024-12-06 19:23:14.876112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.564 [2024-12-06 19:23:14.933167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.564 [2024-12-06 19:23:14.933240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.564 [2024-12-06 19:23:14.933255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.564 [2024-12-06 19:23:14.933267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.564 [2024-12-06 19:23:14.933278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:04.564 [2024-12-06 19:23:14.933938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.564 [2024-12-06 19:23:15.083516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.564 [2024-12-06 19:23:15.091723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:04.564 19:23:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.564 null0 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.564 null1 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1194615 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1194615 /tmp/host.sock 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1194615 ']' 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:04.564 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.564 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.823 [2024-12-06 19:23:15.167582] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:25:04.823 [2024-12-06 19:23:15.167677] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194615 ] 00:25:04.823 [2024-12-06 19:23:15.232661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.823 [2024-12-06 19:23:15.289295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:05.081 
19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:05.081 19:23:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.081 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:05.340 19:23:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.340 [2024-12-06 19:23:15.749483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:05.340 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.341 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.598 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:05.598 19:23:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:06.163 [2024-12-06 19:23:16.511313] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:06.163 [2024-12-06 19:23:16.511337] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:06.163 [2024-12-06 19:23:16.511360] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:06.163 [2024-12-06 19:23:16.639791] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:06.163 [2024-12-06 19:23:16.739594] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:06.421 [2024-12-06 19:23:16.740614] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1d3caa0:1 started. 00:25:06.421 [2024-12-06 19:23:16.742521] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:06.421 [2024-12-06 19:23:16.742544] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:06.421 [2024-12-06 19:23:16.749503] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d3caa0 was disconnected and freed. delete nvme_qpair. 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.421 19:23:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.421 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.422 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.422 19:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:06.679 
19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.679 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.938 [2024-12-06 19:23:17.316754] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d3cc80:1 started. 00:25:06.938 [2024-12-06 19:23:17.320734] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d3cc80 was disconnected and freed. delete nvme_qpair. 
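An annotation on the trace above: the repeated `local max=10` / `(( max-- ))` / `eval` / `sleep 1` lines are the `waitforcondition` helper from `autotest_common.sh` polling until a condition string becomes true. A minimal reconstruction of that pattern (sketched from the trace, not the exact upstream source) looks like:

```shell
# Minimal reconstruction of the waitforcondition pattern visible in the
# xtrace: re-evaluate a condition string up to $max times, sleeping 1s
# between attempts, and fail if it never becomes true.
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}
```

For example, `waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'` would re-query the bdev list once per second for up to ~10 seconds, which is why the trace shows the same `rpc_cmd -s /tmp/host.sock bdev_get_bdevs` pipeline repeated before and after `sleep 1`.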
00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.938 [2024-12-06 19:23:17.382468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:06.938 [2024-12-06 19:23:17.383310] bdev_nvme.c:7492:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:06.938 [2024-12-06 19:23:17.383363] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.938 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.938 19:23:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.939 19:23:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.939 [2024-12-06 19:23:17.510198] bdev_nvme.c:7434:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:06.939 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:07.504 [2024-12-06 19:23:17.778723] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:07.504 [2024-12-06 19:23:17.778805] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:07.504 [2024-12-06 19:23:17.778824] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
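The `[[ 4420 == \4\4\2\0\ \4\4\2\1 ]]` mismatch just above comes from `get_subsystem_paths`, which collapses one `trsvcid` per line of `jq` output into a single sorted, space-separated string via `sort -n | xargs` before comparing it to `"$NVMF_PORT $NVMF_SECOND_PORT"`. In isolation, with the `rpc_cmd`/`jq` output replaced by a literal:

```shell
# get_subsystem_paths-style normalization: jq emits one trsvcid per line;
# sort -n orders them numerically and xargs joins them with single spaces,
# yielding a string comparable against "$NVMF_PORT $NVMF_SECOND_PORT".
paths=$(printf '4421\n4420\n' | sort -n | xargs)
echo "$paths"   # 4420 4421
```

The first evaluation fails (only `4420` exists), and the check passes once the discovery poller attaches the second path on port 4421.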
00:25:07.504 [2024-12-06 19:23:17.778833] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.198 [2024-12-06 19:23:18.598989] bdev_nvme.c:7492:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:08.198 [2024-12-06 19:23:18.599029] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:08.198 [2024-12-06 19:23:18.601368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.198 [2024-12-06 19:23:18.601402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.198 [2024-12-06 19:23:18.601439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
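The `eval get_notification_count '&&' '((notification_count' == 'expected_count))'` lines above are bash xtrace output for evaluating a condition that was passed around as a string. A reduced stand-alone version of that mechanism, with a hypothetical `get_count` stand-in for the RPC-backed `get_notification_count` helper:

```shell
# Conditions in this test suite are stored as strings and run via eval,
# so one polling helper can drive arbitrary checks. get_count is a
# stand-in for the RPC-backed get_notification_count seen in the trace.
get_count() { notification_count=0; }
expected_count=0
cond='get_count && ((notification_count == expected_count))'
eval "$cond" && echo ok
```

Storing the condition unexpanded (note the single quotes) is what defers the `$(...)` and arithmetic expansion until each retry, so every poll sees fresh values.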
00:25:08.198 [2024-12-06 19:23:18.601454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.198 [2024-12-06 19:23:18.601468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.198 [2024-12-06 19:23:18.601495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.198 [2024-12-06 19:23:18.601510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.198 [2024-12-06 19:23:18.601523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.198 [2024-12-06 19:23:18.601536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d050 is same with the state(6) to be set 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:08.198 19:23:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:08.198 [2024-12-06 19:23:18.611356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0d050 (9): Bad file descriptor 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.198 [2024-12-06 19:23:18.621409] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:08.198 [2024-12-06 19:23:18.621432] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:08.198 [2024-12-06 19:23:18.621447] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:08.198 [2024-12-06 19:23:18.621456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:08.198 [2024-12-06 19:23:18.621507] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:08.198 [2024-12-06 19:23:18.621776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.198 [2024-12-06 19:23:18.621807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0d050 with addr=10.0.0.2, port=4420 00:25:08.198 [2024-12-06 19:23:18.621824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d050 is same with the state(6) to be set 00:25:08.198 [2024-12-06 19:23:18.621847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0d050 (9): Bad file descriptor 00:25:08.198 [2024-12-06 19:23:18.621876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:08.198 [2024-12-06 19:23:18.621892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:08.198 [2024-12-06 19:23:18.621910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:08.198 [2024-12-06 19:23:18.621923] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:08.198 [2024-12-06 19:23:18.621934] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:08.198 [2024-12-06 19:23:18.621942] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:08.198 [2024-12-06 19:23:18.631539] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:08.198 [2024-12-06 19:23:18.631560] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:08.198 [2024-12-06 19:23:18.631569] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:08.198 [2024-12-06 19:23:18.631576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:08.198 [2024-12-06 19:23:18.631615] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:08.198 [2024-12-06 19:23:18.631781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.198 [2024-12-06 19:23:18.631809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0d050 with addr=10.0.0.2, port=4420 00:25:08.198 [2024-12-06 19:23:18.631826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d050 is same with the state(6) to be set 00:25:08.198 [2024-12-06 19:23:18.631848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0d050 (9): Bad file descriptor 00:25:08.198 [2024-12-06 19:23:18.631868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:08.198 [2024-12-06 19:23:18.631882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:08.198 [2024-12-06 19:23:18.631895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:08.198 [2024-12-06 19:23:18.631908] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:08.198 [2024-12-06 19:23:18.631917] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:08.198 [2024-12-06 19:23:18.631925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:08.198 [2024-12-06 19:23:18.641672] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:08.198 [2024-12-06 19:23:18.641697] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:08.198 [2024-12-06 19:23:18.641707] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:08.198 [2024-12-06 19:23:18.641730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:08.198 [2024-12-06 19:23:18.641757] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:08.198 [2024-12-06 19:23:18.641887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.198 [2024-12-06 19:23:18.641915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0d050 with addr=10.0.0.2, port=4420 00:25:08.198 [2024-12-06 19:23:18.641931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d050 is same with the state(6) to be set 00:25:08.198 [2024-12-06 19:23:18.641958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0d050 (9): Bad file descriptor 00:25:08.198 [2024-12-06 19:23:18.641980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:08.198 [2024-12-06 19:23:18.641993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:08.198 [2024-12-06 19:23:18.642006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:08.198 [2024-12-06 19:23:18.642019] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:08.198 [2024-12-06 19:23:18.642033] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:08.198 [2024-12-06 19:23:18.642041] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.198 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:25:08.198 [2024-12-06 19:23:18.651792] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:08.198 [2024-12-06 19:23:18.651819] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:08.199 [2024-12-06 19:23:18.651829] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:08.199 [2024-12-06 19:23:18.651837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:08.199 [2024-12-06 19:23:18.651864] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:08.199 [2024-12-06 19:23:18.652005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.199 [2024-12-06 19:23:18.652032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0d050 with addr=10.0.0.2, port=4420 00:25:08.199 [2024-12-06 19:23:18.652048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d050 is same with the state(6) to be set 00:25:08.199 [2024-12-06 19:23:18.652069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0d050 (9): Bad file descriptor 00:25:08.199 [2024-12-06 19:23:18.652089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:08.199 [2024-12-06 19:23:18.652102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:08.199 [2024-12-06 19:23:18.652120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:08.199 [2024-12-06 19:23:18.652133] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:08.199 [2024-12-06 19:23:18.652142] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:08.199 [2024-12-06 19:23:18.652149] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:08.199 [2024-12-06 19:23:18.661899] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:08.199 [2024-12-06 19:23:18.661922] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:08.199 [2024-12-06 19:23:18.661932] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:08.199 [2024-12-06 19:23:18.661940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:08.199 [2024-12-06 19:23:18.661981] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:08.199 [2024-12-06 19:23:18.662126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.199 [2024-12-06 19:23:18.662153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0d050 with addr=10.0.0.2, port=4420 00:25:08.199 [2024-12-06 19:23:18.662170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d050 is same with the state(6) to be set 00:25:08.199 [2024-12-06 19:23:18.662191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0d050 (9): Bad file descriptor 00:25:08.199 [2024-12-06 19:23:18.662211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:08.199 [2024-12-06 19:23:18.662225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:08.199 [2024-12-06 19:23:18.662238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:08.199 [2024-12-06 19:23:18.662250] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:08.199 [2024-12-06 19:23:18.662259] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:08.199 [2024-12-06 19:23:18.662267] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:08.199 [2024-12-06 19:23:18.672029] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:08.199 [2024-12-06 19:23:18.672049] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:08.199 [2024-12-06 19:23:18.672058] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:08.199 [2024-12-06 19:23:18.672065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:08.199 [2024-12-06 19:23:18.672103] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:08.199 [2024-12-06 19:23:18.672257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.199 [2024-12-06 19:23:18.672284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0d050 with addr=10.0.0.2, port=4420 00:25:08.199 [2024-12-06 19:23:18.672300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d050 is same with the state(6) to be set 00:25:08.199 [2024-12-06 19:23:18.672321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0d050 (9): Bad file descriptor 00:25:08.199 [2024-12-06 19:23:18.672353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:08.199 [2024-12-06 19:23:18.672371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:08.199 [2024-12-06 19:23:18.672392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:08.199 [2024-12-06 19:23:18.672405] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:08.199 [2024-12-06 19:23:18.672414] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:08.199 [2024-12-06 19:23:18.672421] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.199 [2024-12-06 19:23:18.682136] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:08.199 [2024-12-06 19:23:18.682156] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:08.199 [2024-12-06 19:23:18.682164] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:08.199 [2024-12-06 19:23:18.682171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:08.199 [2024-12-06 19:23:18.682208] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:08.199 [2024-12-06 19:23:18.682384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.199 [2024-12-06 19:23:18.682412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0d050 with addr=10.0.0.2, port=4420 00:25:08.199 [2024-12-06 19:23:18.682428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d050 is same with the state(6) to be set 00:25:08.199 [2024-12-06 19:23:18.682450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0d050 (9): Bad file descriptor 00:25:08.199 [2024-12-06 19:23:18.682495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:08.199 [2024-12-06 19:23:18.682514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:08.199 [2024-12-06 19:23:18.682527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:08.199 [2024-12-06 19:23:18.682539] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:08.199 [2024-12-06 19:23:18.682549] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:08.199 [2024-12-06 19:23:18.682557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:08.199 [2024-12-06 19:23:18.685488] bdev_nvme.c:7297:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:08.199 [2024-12-06 19:23:18.685517] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:08.199 19:23:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.199 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:08.481 19:23:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:08.481 19:23:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.481 19:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.412 [2024-12-06 19:23:19.919556] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:09.412 [2024-12-06 
19:23:19.919595] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:09.412 [2024-12-06 19:23:19.919621] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.671 [2024-12-06 19:23:20.007877] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:09.671 [2024-12-06 19:23:20.072558] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:09.671 [2024-12-06 19:23:20.073545] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1e73e40:1 started. 00:25:09.671 [2024-12-06 19:23:20.075842] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:09.671 [2024-12-06 19:23:20.075888] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.671 [2024-12-06 19:23:20.078564] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1e73e40 was disconnected and freed. delete nvme_qpair. 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.671 request: 00:25:09.671 { 00:25:09.671 "name": "nvme", 00:25:09.671 "trtype": "tcp", 00:25:09.671 "traddr": "10.0.0.2", 00:25:09.671 "adrfam": "ipv4", 00:25:09.671 "trsvcid": "8009", 00:25:09.671 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:09.671 "wait_for_attach": true, 00:25:09.671 "method": "bdev_nvme_start_discovery", 00:25:09.671 "req_id": 1 00:25:09.671 } 00:25:09.671 Got JSON-RPC error response 00:25:09.671 response: 00:25:09.671 { 00:25:09.671 "code": -17, 00:25:09.671 "message": "File exists" 00:25:09.671 } 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.671 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.672 request: 00:25:09.672 { 00:25:09.672 "name": "nvme_second", 00:25:09.672 "trtype": "tcp", 00:25:09.672 "traddr": "10.0.0.2", 00:25:09.672 "adrfam": "ipv4", 00:25:09.672 "trsvcid": "8009", 00:25:09.672 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:09.672 "wait_for_attach": true, 00:25:09.672 "method": "bdev_nvme_start_discovery", 00:25:09.672 "req_id": 1 00:25:09.672 } 00:25:09.672 Got JSON-RPC error response 00:25:09.672 response: 00:25:09.672 
{ 00:25:09.672 "code": -17, 00:25:09.672 "message": "File exists" 00:25:09.672 } 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.672 
19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.672 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:09.930 19:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.930 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.864 [2024-12-06 19:23:21.299342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.864 [2024-12-06 19:23:21.299409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75000 with addr=10.0.0.2, port=8010 00:25:10.864 [2024-12-06 19:23:21.299445] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:10.864 [2024-12-06 19:23:21.299461] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:10.864 [2024-12-06 19:23:21.299475] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:11.799 [2024-12-06 19:23:22.301906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.799 [2024-12-06 19:23:22.301971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48350 with addr=10.0.0.2, port=8010 00:25:11.799 [2024-12-06 19:23:22.302008] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:11.799 [2024-12-06 19:23:22.302024] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:11.799 [2024-12-06 19:23:22.302038] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:12.733 [2024-12-06 19:23:23.303984] bdev_nvme.c:7553:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:12.733 request: 00:25:12.733 { 00:25:12.733 "name": "nvme_second", 00:25:12.733 "trtype": "tcp", 00:25:12.733 "traddr": "10.0.0.2", 00:25:12.733 "adrfam": "ipv4", 00:25:12.733 "trsvcid": "8010", 00:25:12.733 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:12.733 "wait_for_attach": false, 00:25:12.733 "attach_timeout_ms": 3000, 
00:25:12.733 "method": "bdev_nvme_start_discovery", 00:25:12.733 "req_id": 1 00:25:12.733 } 00:25:12.733 Got JSON-RPC error response 00:25:12.733 response: 00:25:12.733 { 00:25:12.733 "code": -110, 00:25:12.733 "message": "Connection timed out" 00:25:12.733 } 00:25:12.733 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:12.733 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:12.733 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:12.733 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:12.733 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:12.733 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 1194615 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.992 rmmod nvme_tcp 00:25:12.992 rmmod nvme_fabrics 00:25:12.992 rmmod nvme_keyring 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1194550 ']' 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1194550 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1194550 ']' 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1194550 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194550 00:25:12.992 19:23:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194550' 00:25:12.992 killing process with pid 1194550 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1194550 00:25:12.992 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1194550 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.250 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.789 
19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.789 00:25:15.789 real 0m13.286s 00:25:15.789 user 0m19.211s 00:25:15.789 sys 0m2.900s 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.789 ************************************ 00:25:15.789 END TEST nvmf_host_discovery 00:25:15.789 ************************************ 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.789 ************************************ 00:25:15.789 START TEST nvmf_host_multipath_status 00:25:15.789 ************************************ 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:15.789 * Looking for test storage... 
00:25:15.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:15.789 19:23:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.789 19:23:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:15.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.789 --rc genhtml_branch_coverage=1 00:25:15.789 --rc genhtml_function_coverage=1 00:25:15.789 --rc genhtml_legend=1 00:25:15.789 --rc geninfo_all_blocks=1 00:25:15.789 --rc geninfo_unexecuted_blocks=1 00:25:15.789 00:25:15.789 ' 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:15.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.789 --rc genhtml_branch_coverage=1 00:25:15.789 --rc genhtml_function_coverage=1 00:25:15.789 --rc genhtml_legend=1 00:25:15.789 --rc geninfo_all_blocks=1 00:25:15.789 --rc geninfo_unexecuted_blocks=1 00:25:15.789 00:25:15.789 ' 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:15.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.789 --rc genhtml_branch_coverage=1 00:25:15.789 --rc genhtml_function_coverage=1 00:25:15.789 --rc genhtml_legend=1 00:25:15.789 --rc geninfo_all_blocks=1 00:25:15.789 --rc geninfo_unexecuted_blocks=1 00:25:15.789 00:25:15.789 ' 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:15.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.789 --rc genhtml_branch_coverage=1 00:25:15.789 --rc genhtml_function_coverage=1 00:25:15.789 --rc genhtml_legend=1 00:25:15.789 --rc geninfo_all_blocks=1 00:25:15.789 --rc geninfo_unexecuted_blocks=1 00:25:15.789 00:25:15.789 ' 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:15.789 
19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.789 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:15.790 19:23:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.790 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.692 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:17.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:17.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:17.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.693 19:23:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:17.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.693 19:23:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.693 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:25:17.952 00:25:17.952 --- 10.0.0.2 ping statistics --- 00:25:17.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.952 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:25:17.952 00:25:17.952 --- 10.0.0.1 ping statistics --- 00:25:17.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.952 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1197735 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1197735 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1197735 ']' 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.952 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.952 [2024-12-06 19:23:28.488303] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:25:17.952 [2024-12-06 19:23:28.488372] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.210 [2024-12-06 19:23:28.557651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:18.210 [2024-12-06 19:23:28.611744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.211 [2024-12-06 19:23:28.611811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:18.211 [2024-12-06 19:23:28.611839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.211 [2024-12-06 19:23:28.611850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.211 [2024-12-06 19:23:28.611859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.211 [2024-12-06 19:23:28.613372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.211 [2024-12-06 19:23:28.613378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.211 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.211 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:18.211 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:18.211 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:18.211 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:18.211 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.211 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1197735 00:25:18.211 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:18.468 [2024-12-06 19:23:29.031983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.726 19:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:18.985 Malloc0 00:25:18.985 19:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:19.243 19:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:19.501 19:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.758 [2024-12-06 19:23:30.129021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.758 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:20.016 [2024-12-06 19:23:30.405756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:20.016 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1198020 00:25:20.016 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:20.016 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:20.016 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1198020 /var/tmp/bdevperf.sock 00:25:20.016 19:23:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1198020 ']' 00:25:20.016 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:20.017 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.017 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:20.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:20.017 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.017 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:20.274 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.274 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:20.274 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:20.532 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:21.095 Nvme0n1 00:25:21.096 19:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:21.660 Nvme0n1 00:25:21.660 19:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:21.660 19:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:23.558 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:23.558 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:23.816 19:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:24.074 19:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:25.007 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:25.007 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:25.007 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.007 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:25.572 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.572 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:25.572 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.572 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.572 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.572 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.572 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.572 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.829 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.829 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.829 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.829 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:26.085 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.342 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.342 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.342 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:26.600 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.600 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:26.600 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.600 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:26.858 19:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.858 19:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:26.858 19:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:27.115 19:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:27.402 19:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:28.335 19:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:28.335 19:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:28.335 19:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.335 19:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.593 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.593 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:28.593 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.593 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:28.849 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.849 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.849 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.849 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.107 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.107 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.107 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.107 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:29.365 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.365 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:29.365 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.365 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.931 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.931 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:29.931 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.931 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:29.931 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.931 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:29.931 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:30.189 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:30.755 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:31.689 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:31.689 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:31.689 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.689 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.947 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.947 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:31.947 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.947 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.205 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.205 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.205 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.205 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.463 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.463 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.463 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.463 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.721 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.721 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.721 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.721 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.979 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.979 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:32.979 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.979 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.237 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.237 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:33.237 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:33.495 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:33.753 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:34.703 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:34.703 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:34.703 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.703 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.960 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.960 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:34.960 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.960 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.525 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.525 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.525 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.525 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.525 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.525 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.525 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.525 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.783 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.783 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.783 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.783 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.041 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.041 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:36.041 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.041 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.299 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.299 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:36.299 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:36.866 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:36.866 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:38.239 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:38.239 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:38.239 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.239 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:38.239 19:23:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.239 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:38.239 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.239 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:38.497 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.497 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:38.497 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.497 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.755 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.755 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.755 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.756 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.013 
19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.013 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:39.013 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.013 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.271 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.271 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:39.271 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.271 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:39.563 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.563 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:39.563 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:39.845 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:40.107 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:41.040 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:41.040 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:41.040 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.040 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:41.298 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.298 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:41.298 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.298 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:41.865 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.865 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:41.865 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.865 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:41.865 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.865 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:41.865 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.865 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:42.431 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.431 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:42.431 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.431 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.431 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.431 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:42.431 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.431 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:42.689 19:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.689 19:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:42.947 19:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:42.947 19:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:43.512 19:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:43.512 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:44.884 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:44.884 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:44.884 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:44.884 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.884 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.884 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:44.884 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.884 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.142 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.142 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.142 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.142 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.399 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.399 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.399 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:45.399 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.658 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.658 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.658 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.658 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.916 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.916 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.916 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.916 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.174 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.174 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:46.174 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:46.431 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:46.996 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:47.941 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:47.941 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:47.941 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.941 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.198 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.198 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:48.198 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.198 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.455 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.455 19:23:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.455 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.455 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.712 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.712 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.712 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.712 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.969 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.969 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:48.969 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.969 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:49.227 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.227 
19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:49.227 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.227 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:49.484 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.484 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:49.484 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:49.741 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:49.998 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:51.371 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:51.371 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:51.371 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.371 19:24:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:51.371 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.371 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:51.371 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.371 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:51.629 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.629 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:51.629 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.629 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:51.887 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.887 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:51.887 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.887 19:24:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.145 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.145 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:52.145 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.145 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:52.402 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.402 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:52.402 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.402 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:52.968 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.968 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:52.968 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:52.968 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:53.539 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:54.474 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:54.474 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:54.474 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.474 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.742 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.742 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:54.742 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.742 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.999 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.999 19:24:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.999 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.999 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:55.257 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.257 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.257 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.257 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:55.516 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.516 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:55.516 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.516 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:55.774 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.774 
19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:55.774 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.774 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.032 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.032 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1198020 00:25:56.032 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1198020 ']' 00:25:56.032 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1198020 00:25:56.032 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:56.032 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.032 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198020 00:25:56.032 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:56.033 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:56.033 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198020' 00:25:56.033 killing process with pid 1198020 00:25:56.033 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1198020 00:25:56.033 
19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1198020 00:25:56.033 { 00:25:56.033 "results": [ 00:25:56.033 { 00:25:56.033 "job": "Nvme0n1", 00:25:56.033 "core_mask": "0x4", 00:25:56.033 "workload": "verify", 00:25:56.033 "status": "terminated", 00:25:56.033 "verify_range": { 00:25:56.033 "start": 0, 00:25:56.033 "length": 16384 00:25:56.033 }, 00:25:56.033 "queue_depth": 128, 00:25:56.033 "io_size": 4096, 00:25:56.033 "runtime": 34.393182, 00:25:56.033 "iops": 8033.627129935229, 00:25:56.033 "mibps": 31.38135597630949, 00:25:56.033 "io_failed": 0, 00:25:56.033 "io_timeout": 0, 00:25:56.033 "avg_latency_us": 15905.182268355318, 00:25:56.033 "min_latency_us": 952.6992592592593, 00:25:56.033 "max_latency_us": 4026531.84 00:25:56.033 } 00:25:56.033 ], 00:25:56.033 "core_count": 1 00:25:56.033 } 00:25:56.305 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1198020 00:25:56.305 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:56.305 [2024-12-06 19:23:30.472694] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:25:56.305 [2024-12-06 19:23:30.472798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198020 ] 00:25:56.305 [2024-12-06 19:23:30.543634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.305 [2024-12-06 19:23:30.605353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.305 Running I/O for 90 seconds... 
00:25:56.305 8611.00 IOPS, 33.64 MiB/s [2024-12-06T18:24:06.882Z] 8644.00 IOPS, 33.77 MiB/s [2024-12-06T18:24:06.882Z] 8601.33 IOPS, 33.60 MiB/s [2024-12-06T18:24:06.882Z] 8610.00 IOPS, 33.63 MiB/s [2024-12-06T18:24:06.882Z] 8567.20 IOPS, 33.47 MiB/s [2024-12-06T18:24:06.882Z] 8598.83 IOPS, 33.59 MiB/s [2024-12-06T18:24:06.882Z] 8592.14 IOPS, 33.56 MiB/s [2024-12-06T18:24:06.882Z] 8588.75 IOPS, 33.55 MiB/s [2024-12-06T18:24:06.882Z] 8581.11 IOPS, 33.52 MiB/s [2024-12-06T18:24:06.882Z] 8567.50 IOPS, 33.47 MiB/s [2024-12-06T18:24:06.882Z] 8566.82 IOPS, 33.46 MiB/s [2024-12-06T18:24:06.882Z] 8568.75 IOPS, 33.47 MiB/s [2024-12-06T18:24:06.882Z] 8555.00 IOPS, 33.42 MiB/s [2024-12-06T18:24:06.882Z] 8542.86 IOPS, 33.37 MiB/s [2024-12-06T18:24:06.882Z] [2024-12-06 19:23:47.140355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.305 [2024-12-06 19:23:47.140424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:56.305 [2024-12-06 19:23:47.140501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.305 [2024-12-06 19:23:47.140522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:56.305 [2024-12-06 19:23:47.140546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-12-06 19:23:47.140562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:56.305 [2024-12-06 19:23:47.140587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:56.305 [2024-12-06 19:23:47.140604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:56.305 [2024-12-06 19:23:47.140626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-12-06 19:23:47.140643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:56.305 [2024-12-06 19:23:47.140674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-12-06 19:23:47.140693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:56.305 [2024-12-06 19:23:47.140716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-12-06 19:23:47.140733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:56.305 [2024-12-06 19:23:47.140756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-12-06 19:23:47.140772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:56.305 [2024-12-06 19:23:47.140795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-12-06 19:23:47.140812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 
dnr:0 00:25:56.305 [2024-12-06 19:23:47.140846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.140865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.140887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.140904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.140926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.140942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.140964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.140980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.306 [2024-12-06 19:23:47.141057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:25:56.306 [2024-12-06 19:23:47.141270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.306 [2024-12-06 19:23:47.141516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:25:56.306 [2024-12-06 19:23:47.141736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.306 [2024-12-06 19:23:47.141945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.141967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-12-06 19:23:47.141983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:56.306 [2024-12-06 19:23:47.142004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:25:56.307 [2024-12-06 19:23:47.142170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.307 [2024-12-06 19:23:47.142372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:25:56.307 [2024-12-06 19:23:47.142577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.142717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.142734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-12-06 19:23:47.143185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:56.307 [2024-12-06 19:23:47.143240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.307 [2024-12-06 19:23:47.143284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.307 [2024-12-06 19:23:47.143328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.307 [2024-12-06 19:23:47.143370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.307 [2024-12-06 19:23:47.143412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.307 [2024-12-06 19:23:47.143454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:25:56.307 [2024-12-06 19:23:47.143480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.307 [2024-12-06 19:23:47.143496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.307 [2024-12-06 19:23:47.143538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.307 [2024-12-06 19:23:47.143580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:56.307 [2024-12-06 19:23:47.143606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.143622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.143648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.143673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.143702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:56.308 [2024-12-06 19:23:47.143719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.143751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.143768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.143794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.143811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.143837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.143853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.143879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.143895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.143921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.143938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:25:56.308 [2024-12-06 19:23:47.143963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.143980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 
[2024-12-06 19:23:47.144191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 
19:23:47.144438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 
19:23:47.144671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:56.308 [2024-12-06 19:23:47.144834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.308 [2024-12-06 19:23:47.144851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.144876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.144892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 
19:23:47.144918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.144933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.144958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.144974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.144999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.145016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.145057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.145099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 
19:23:47.145140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.145181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.145221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-12-06 19:23:47.145262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-12-06 19:23:47.145303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-12-06 19:23:47.145350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 
19:23:47.145376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-12-06 19:23:47.145391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-12-06 19:23:47.145432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.145458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-12-06 19:23:47.145474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-12-06 19:23:47.147214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 
19:23:47.147311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 
19:23:47.147568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:56.309 [2024-12-06 19:23:47.147760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.309 [2024-12-06 19:23:47.147777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:56.310 [2024-12-06 19:23:47.147805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.310 [2024-12-06 
19:23:47.147821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:56.310 [2024-12-06 19:23:47.147850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.310 [2024-12-06 19:23:47.147866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:56.310 [2024-12-06 19:23:47.147894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.310 [2024-12-06 19:23:47.147910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:56.310 [2024-12-06 19:23:47.147939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.310 [2024-12-06 19:23:47.147956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:56.310 8535.87 IOPS, 33.34 MiB/s [2024-12-06T18:24:06.887Z] 8002.38 IOPS, 31.26 MiB/s [2024-12-06T18:24:06.887Z] 7531.65 IOPS, 29.42 MiB/s [2024-12-06T18:24:06.887Z] 7113.22 IOPS, 27.79 MiB/s [2024-12-06T18:24:06.887Z] 6738.84 IOPS, 26.32 MiB/s [2024-12-06T18:24:06.887Z] 6837.40 IOPS, 26.71 MiB/s [2024-12-06T18:24:06.887Z] 6918.05 IOPS, 27.02 MiB/s [2024-12-06T18:24:06.887Z] 7030.05 IOPS, 27.46 MiB/s [2024-12-06T18:24:06.887Z] 7204.52 IOPS, 28.14 MiB/s [2024-12-06T18:24:06.887Z] 7374.21 IOPS, 28.81 MiB/s [2024-12-06T18:24:06.887Z] 7505.52 IOPS, 29.32 MiB/s [2024-12-06T18:24:06.887Z] 7549.42 IOPS, 29.49 MiB/s [2024-12-06T18:24:06.887Z] 7587.22 IOPS, 29.64 MiB/s [2024-12-06T18:24:06.887Z] 7622.18 IOPS, 29.77 MiB/s [2024-12-06T18:24:06.887Z] 
7704.10 IOPS, 30.09 MiB/s [2024-12-06T18:24:06.887Z] 7821.50 IOPS, 30.55 MiB/s [2024-12-06T18:24:06.887Z] 7927.74 IOPS, 30.97 MiB/s [2024-12-06T18:24:06.887Z] [2024-12-06 19:24:03.804567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.310 [2024-12-06 19:24:03.804640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:56.310 [2024-12-06 19:24:03.804690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.310 [2024-12-06 19:24:03.804731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:56.310 [2024-12-06 19:24:03.804757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.310 [2024-12-06 19:24:03.804774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:56.310 [2024-12-06 19:24:03.804797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.310 [2024-12-06 19:24:03.804813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:56.310 [2024-12-06 19:24:03.804835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.310 [2024-12-06 19:24:03.804851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:56.310 [2024-12-06 19:24:03.804873] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.310 [2024-12-06 19:24:03.804889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:56.310 [2024-12-06 19:24:03.804921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.310 [2024-12-06 19:24:03.804937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0
[... many similar *NOTICE* command/completion pairs elided: WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK) commands on sqid:1, nsid:1, lba:55344-56408, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-12-06 19:24:03.804958 through 19:24:03.812623 ...]
00:25:56.313 [2024-12-06 19:24:03.812644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.313 [2024-12-06 19:24:03.812671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:56.313 [2024-12-06 19:24:03.814003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.313 [2024-12-06 19:24:03.814029] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.313 [2024-12-06 19:24:03.814073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.313 [2024-12-06 19:24:03.814111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.313 [2024-12-06 19:24:03.814148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.313 [2024-12-06 19:24:03.814192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.313 [2024-12-06 19:24:03.814229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.313 [2024-12-06 19:24:03.814265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-12-06 19:24:03.814302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-12-06 19:24:03.814339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-12-06 19:24:03.814375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:56.313 [2024-12-06 19:24:03.814396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-12-06 19:24:03.814412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.814448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.814485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.814522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.814558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.814595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.814641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.814689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.814727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.814764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.814801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.814838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.814875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.814912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.814948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.814969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.814984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.815006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.815022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.815044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.815059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.815080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.815100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.815123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.815138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.815160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.815175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.815196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.815212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.815233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.815249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.815270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.815286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.815308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.815324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.817711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.817738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.817766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.817784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.817807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.817822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.817844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.817859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.817881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.817896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.817923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.817968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-12-06 19:24:03.817984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.818005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.818021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.818043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.818058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.818080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.818095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.818116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.818132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.818153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.818169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.818194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.818210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.818232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.818247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:56.314 [2024-12-06 19:24:03.818269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.314 [2024-12-06 19:24:03.818284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.818322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.818359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.818396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.818438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.818476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.818513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.818549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.818586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.818623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.818660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.818707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.818744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.818781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.818818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.818855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.818899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.818936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.818973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.818995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.819010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.819032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.819047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.820030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.820074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.820112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.820150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.820194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.820231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.820268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.820310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.820349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.820386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.315 [2024-12-06 19:24:03.820423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.820460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.315 [2024-12-06 19:24:03.820497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:56.315 [2024-12-06 19:24:03.820519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.820535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.821862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.821972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.821994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.822009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.822030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.822046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.823275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.823334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.823379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.823416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.823453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.823497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.823534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.823571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.823608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.823645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.823695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.316 [2024-12-06 19:24:03.823734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.316 [2024-12-06 19:24:03.823772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:56.316 [2024-12-06 19:24:03.823794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.823810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.823832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.823848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.823869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.823884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.823906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.823922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.823948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.823964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.824018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.824054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.824091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.824128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.824164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.824201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.824239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.824275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.824312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.824334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.824349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.825455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.825520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.825560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.825597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.825634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.825679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.825718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.825756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.825793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.825830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.825866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.825888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.825904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.826462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.826512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.826551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.826588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.826626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.826671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.826713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.317 [2024-12-06 19:24:03.826750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.826787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.826824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.826860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:56.317 [2024-12-06 19:24:03.826887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.317 [2024-12-06 19:24:03.826903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.826925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.826940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.826962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.826977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.827003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.827020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.827042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.827057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.827079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.827094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.827115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.827135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.827157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.827173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.827195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.827227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.828424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.828467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.828655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.828704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.828742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.828779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.828817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.828855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.828877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.828893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.829677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.829717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.829745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.829763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.829785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.829801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.829823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.829839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.829866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.829883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.829905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.829920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.829942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.829958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.831044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.831069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.831097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.318 [2024-12-06 19:24:03.831114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.831136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.831152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.831174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.831190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.831211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.318 [2024-12-06 19:24:03.831227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:56.318 [2024-12-06 19:24:03.831249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.831272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.831312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.831349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.831386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.831424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.831462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.831514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.831551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.831587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.831623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.831685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.831725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.831762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.831805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.831842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.831880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.831916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.831954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.831991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.832007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.832028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.832043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.832064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.832078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.832099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.832114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.834425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.834486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.834524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.834568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.834606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.834643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.834691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.834730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.834767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.834805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.834841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.834878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.834915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.834952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.834989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.835005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.835026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.319 [2024-12-06 19:24:03.835046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:56.319 [2024-12-06 19:24:03.835068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.319 [2024-12-06 19:24:03.835083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.835120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.835156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.835209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.835247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.835284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.835321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.835358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.835396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.835433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.835470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.835512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.835550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.835602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.835625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.835641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.836778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.836803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.836831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.836848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.836871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.836887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.836908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.836925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.836946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.836962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.836984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.837000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.837237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.837274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.837311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.320 [2024-12-06 19:24:03.837498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.320 [2024-12-06 19:24:03.837550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:56.320 [2024-12-06 19:24:03.837571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.837586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.837630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.837647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.838441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.838485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.838522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.838558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.838594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.838630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.838695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.838733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.838770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.838807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.838844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.838866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.838887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.839688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.839727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.839764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.839918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.839956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.839993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.840008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.840030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.840046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.840067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.840082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.840102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.840117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.840138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.840153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.840174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.321 [2024-12-06 19:24:03.840189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.840210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.840225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.840984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.841028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:56.321 [2024-12-06 19:24:03.841056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.321 [2024-12-06 19:24:03.841073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.841095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.841132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.841156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.841172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.841194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.841210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.841231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.841247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.841268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.841284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.841306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.841321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.841343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.841362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.841385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.841401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.841423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.841439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.842682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.842709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.842735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.842753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.842781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.842798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.842820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.842835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.842857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.842873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.842895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.842910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.842933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.842949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.842985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.843001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.843037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.843074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.843109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.843146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.843182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.843218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.843260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.843296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.843331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.843368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.843404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.843439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.843476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.843512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.843548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.843569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.843584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.845455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.845480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.845508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.845525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.845552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.322 [2024-12-06 19:24:03.845575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.845599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.845615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.845637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.845652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:56.322 [2024-12-06 19:24:03.845683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.322 [2024-12-06 19:24:03.845702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.845724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.845740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.845761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.845777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.845798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.845813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.845835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.845850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.845871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.845887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.845908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.845924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.845945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.845960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.845998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.846013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.846070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.846109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.846147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.846183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.846220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.846258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.846295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.846332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.846368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.846406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.846443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.846479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.846501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.846517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.847244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.847269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.847295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.847313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.847335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.847351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.847373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.847388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.847410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.847426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.847447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.847463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.847485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.847501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.848943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.848969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.849019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.849057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.849095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.849131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.323 [2024-12-06 19:24:03.849179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.849216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.849253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.849291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.323 [2024-12-06 19:24:03.849328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.323 [2024-12-06 19:24:03.849349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.849365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.849476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.849513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.849549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.849587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.849865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.849902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.849977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.849998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.850014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.850050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.850092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.850130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.850167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.850205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.850780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.850825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.850863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.850902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.850940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.850977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.850999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.851015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.851037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.851052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.851074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.851090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.851118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.324 [2024-12-06 19:24:03.851135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.852621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.852646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:56.324 [2024-12-06 19:24:03.852682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.324 [2024-12-06 19:24:03.852702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.852725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.325 [2024-12-06 19:24:03.852741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.852762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.852778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.852800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.852816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.852837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.852853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.852874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.852890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.852912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.852927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.852948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.852964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.852985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.853001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.325 [2024-12-06 19:24:03.853045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.325 [2024-12-06 19:24:03.853089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.853128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.325 [2024-12-06 19:24:03.853165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.325 [2024-12-06 19:24:03.853202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.853240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.853281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.853320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.325 [2024-12-06 19:24:03.853358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.853395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.325 [2024-12-06 19:24:03.853432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.325 [2024-12-06 19:24:03.853469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.325 [2024-12-06 19:24:03.853506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.853548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.325 [2024-12-06 19:24:03.853586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:56.325 [2024-12-06 19:24:03.853608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.325 [2024-12-06 19:24:03.853623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0
[... 87 further READ/WRITE nvme_io_qpair_print_command notices and their matching spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions on qid:1 (sqhd:001d through sqhd:0073, timestamps 19:24:03.853645 through 19:24:03.862637) omitted ...]
00:25:56.327 [2024-12-06 19:24:03.862683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.327 [2024-12-06 19:24:03.862703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:56.327 7990.75 IOPS, 31.21 MiB/s [2024-12-06T18:24:06.904Z]
8010.15 IOPS, 31.29 MiB/s [2024-12-06T18:24:06.904Z]
8027.71 IOPS, 31.36 MiB/s [2024-12-06T18:24:06.904Z]
Received shutdown signal, test time was about 34.393943 seconds
00:25:56.327
                                                             Latency(us)
[2024-12-06T18:24:06.904Z] Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average      min        max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
Nvme0n1 :      34.39    8033.63   31.38    0.00    0.00   15905.18    952.70   4026531.84
===================================================================================================================
Total   :              8033.63   31.38    0.00    0.00   15905.18    952.70   4026531.84
19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status
-- nvmf/common.sh@121 -- # sync
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:56.586 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:56.587 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:56.587 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1197735 ']'
00:25:56.587 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1197735
00:25:56.587 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1197735 ']'
00:25:56.587 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1197735
00:25:56.587 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:56.845 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:56.845 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197735
00:25:56.845 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:56.845 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:56.845 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197735'
killing process with pid 1197735
00:25:56.845 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1197735
00:25:56.845 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1197735
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:57.105 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:59.030 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush
cvl_0_1 00:25:59.030 00:25:59.030 real 0m43.680s 00:25:59.030 user 2m12.424s 00:25:59.030 sys 0m10.893s 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:59.030 ************************************ 00:25:59.030 END TEST nvmf_host_multipath_status 00:25:59.030 ************************************ 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.030 ************************************ 00:25:59.030 START TEST nvmf_discovery_remove_ifc 00:25:59.030 ************************************ 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:59.030 * Looking for test storage... 
00:25:59.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:59.030 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.289 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:25:59.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.290 --rc genhtml_branch_coverage=1 00:25:59.290 --rc genhtml_function_coverage=1 00:25:59.290 --rc genhtml_legend=1 00:25:59.290 --rc geninfo_all_blocks=1 00:25:59.290 --rc geninfo_unexecuted_blocks=1 00:25:59.290 00:25:59.290 ' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:59.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.290 --rc genhtml_branch_coverage=1 00:25:59.290 --rc genhtml_function_coverage=1 00:25:59.290 --rc genhtml_legend=1 00:25:59.290 --rc geninfo_all_blocks=1 00:25:59.290 --rc geninfo_unexecuted_blocks=1 00:25:59.290 00:25:59.290 ' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:59.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.290 --rc genhtml_branch_coverage=1 00:25:59.290 --rc genhtml_function_coverage=1 00:25:59.290 --rc genhtml_legend=1 00:25:59.290 --rc geninfo_all_blocks=1 00:25:59.290 --rc geninfo_unexecuted_blocks=1 00:25:59.290 00:25:59.290 ' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:59.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.290 --rc genhtml_branch_coverage=1 00:25:59.290 --rc genhtml_function_coverage=1 00:25:59.290 --rc genhtml_legend=1 00:25:59.290 --rc geninfo_all_blocks=1 00:25:59.290 --rc geninfo_unexecuted_blocks=1 00:25:59.290 00:25:59.290 ' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:59.290 
19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:59.290 19:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:01.198 19:24:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.198 19:24:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:01.198 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:01.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.199 19:24:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:01.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:01.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:01.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:01.199 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:26:01.458 00:26:01.458 --- 10.0.0.2 ping statistics --- 00:26:01.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.458 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:01.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:26:01.458 00:26:01.458 --- 10.0.0.1 ping statistics --- 00:26:01.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.458 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1205109 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1205109 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1205109 ']' 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.458 19:24:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.458 [2024-12-06 19:24:11.973260] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:26:01.458 [2024-12-06 19:24:11.973345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.717 [2024-12-06 19:24:12.046232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.717 [2024-12-06 19:24:12.099438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.717 [2024-12-06 19:24:12.099516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:01.717 [2024-12-06 19:24:12.099544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.717 [2024-12-06 19:24:12.099556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.717 [2024-12-06 19:24:12.099566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.717 [2024-12-06 19:24:12.100194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.717 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.717 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:01.717 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:01.717 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:01.717 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.717 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.717 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:01.717 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.717 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.717 [2024-12-06 19:24:12.247875] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.717 [2024-12-06 19:24:12.256095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:01.717 null0 00:26:01.717 [2024-12-06 19:24:12.287999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1205128 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1205128 /tmp/host.sock 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1205128 ']' 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:01.976 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.976 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.976 [2024-12-06 19:24:12.356582] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:26:01.976 [2024-12-06 19:24:12.356680] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205128 ] 00:26:01.976 [2024-12-06 19:24:12.422489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.976 [2024-12-06 19:24:12.479539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.235 19:24:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.235 19:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.171 [2024-12-06 19:24:13.725329] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:03.171 [2024-12-06 19:24:13.725365] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:03.171 [2024-12-06 19:24:13.725388] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:03.430 [2024-12-06 19:24:13.852839] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:03.430 [2024-12-06 19:24:13.955771] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:03.430 [2024-12-06 19:24:13.956827] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2164510:1 started. 
00:26:03.430 [2024-12-06 19:24:13.958524] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:03.430 [2024-12-06 19:24:13.958584] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:03.430 [2024-12-06 19:24:13.958623] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:03.430 [2024-12-06 19:24:13.958662] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:03.430 [2024-12-06 19:24:13.958704] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.430 [2024-12-06 19:24:13.964008] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2164510 was disconnected and freed. delete nvme_qpair. 
00:26:03.430 19:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.430 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:03.430 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:03.689 19:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.636 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.636 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.637 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.637 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.637 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.637 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.637 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.637 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.637 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:04.637 19:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:06.012 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:06.945 19:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.877 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.877 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.877 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.878 19:24:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.878 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.878 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.878 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.878 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.878 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:07.878 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:08.810 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:26:09.068 [2024-12-06 19:24:19.399722] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:09.068 [2024-12-06 19:24:19.399803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.068 [2024-12-06 19:24:19.399825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.068 [2024-12-06 19:24:19.399846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.068 [2024-12-06 19:24:19.399867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.068 [2024-12-06 19:24:19.399881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.068 [2024-12-06 19:24:19.399894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.068 [2024-12-06 19:24:19.399907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.068 [2024-12-06 19:24:19.399920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.068 [2024-12-06 19:24:19.399935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.068 [2024-12-06 19:24:19.399948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.068 [2024-12-06 19:24:19.399961] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2140d50 is same with the state(6) to be set 00:26:09.068 [2024-12-06 19:24:19.409741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2140d50 (9): Bad file descriptor 00:26:09.068 [2024-12-06 19:24:19.419782] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:09.069 [2024-12-06 19:24:19.419804] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:09.069 [2024-12-06 19:24:19.419818] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:09.069 [2024-12-06 19:24:19.419828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:09.069 [2024-12-06 19:24:19.419885] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.003 [2024-12-06 19:24:20.456709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:10.003 [2024-12-06 19:24:20.456787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2140d50 with addr=10.0.0.2, port=4420 00:26:10.003 [2024-12-06 19:24:20.456817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2140d50 is same with the state(6) to be set 00:26:10.003 [2024-12-06 19:24:20.456863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2140d50 (9): Bad file descriptor 00:26:10.003 [2024-12-06 19:24:20.457372] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:10.003 [2024-12-06 19:24:20.457420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:10.003 [2024-12-06 19:24:20.457438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:10.003 [2024-12-06 19:24:20.457455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:10.003 [2024-12-06 19:24:20.457470] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:10.003 [2024-12-06 19:24:20.457490] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:10.003 [2024-12-06 19:24:20.457499] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:10.003 [2024-12-06 19:24:20.457515] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:10.003 [2024-12-06 19:24:20.457525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:10.003 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:11.010 [2024-12-06 19:24:21.460025] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:11.010 [2024-12-06 19:24:21.460065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:11.010 [2024-12-06 19:24:21.460088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:11.010 [2024-12-06 19:24:21.460116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:11.010 [2024-12-06 19:24:21.460131] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:11.010 [2024-12-06 19:24:21.460143] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:11.010 [2024-12-06 19:24:21.460153] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:11.010 [2024-12-06 19:24:21.460161] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:11.010 [2024-12-06 19:24:21.460212] bdev_nvme.c:7261:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:11.010 [2024-12-06 19:24:21.460268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.010 [2024-12-06 19:24:21.460291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.010 [2024-12-06 19:24:21.460312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.010 [2024-12-06 19:24:21.460326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.010 [2024-12-06 19:24:21.460339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.010 [2024-12-06 19:24:21.460352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.010 [2024-12-06 19:24:21.460365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.010 [2024-12-06 19:24:21.460378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.010 [2024-12-06 19:24:21.460392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.010 [2024-12-06 19:24:21.460404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.010 [2024-12-06 19:24:21.460417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:11.010 [2024-12-06 19:24:21.460474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21304a0 (9): Bad file descriptor 00:26:11.010 [2024-12-06 19:24:21.461462] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:11.010 [2024-12-06 19:24:21.461483] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.010 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:11.267 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:11.267 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:12.202 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.136 [2024-12-06 19:24:23.511252] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:13.137 [2024-12-06 19:24:23.511293] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:13.137 [2024-12-06 19:24:23.511318] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:13.137 [2024-12-06 19:24:23.638705] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:13.137 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.137 [2024-12-06 19:24:23.700330] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:13.137 [2024-12-06 19:24:23.701094] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x216de20:1 started. 
00:26:13.137 [2024-12-06 19:24:23.702471] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:13.137 [2024-12-06 19:24:23.702519] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:13.137 [2024-12-06 19:24:23.702552] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:13.137 [2024-12-06 19:24:23.702576] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:13.137 [2024-12-06 19:24:23.702589] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:13.137 [2024-12-06 19:24:23.709747] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x216de20 was disconnected and freed. delete nvme_qpair. 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:14.513 19:24:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1205128 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1205128 ']' 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1205128 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1205128 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1205128' 00:26:14.513 killing process with pid 1205128 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1205128 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1205128 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:14.513 
19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.513 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.513 rmmod nvme_tcp 00:26:14.513 rmmod nvme_fabrics 00:26:14.513 rmmod nvme_keyring 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1205109 ']' 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1205109 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1205109 ']' 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1205109 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1205109 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1205109' 00:26:14.513 
killing process with pid 1205109 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1205109 00:26:14.513 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1205109 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.773 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:17.309 00:26:17.309 real 0m17.831s 00:26:17.309 user 0m25.798s 00:26:17.309 sys 0m3.091s 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 ************************************ 00:26:17.309 END TEST nvmf_discovery_remove_ifc 00:26:17.309 ************************************ 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 ************************************ 00:26:17.309 START TEST nvmf_identify_kernel_target 00:26:17.309 ************************************ 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:17.309 * Looking for test storage... 
00:26:17.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:17.309 19:24:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.309 19:24:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:17.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.309 --rc genhtml_branch_coverage=1 00:26:17.309 --rc genhtml_function_coverage=1 00:26:17.309 --rc genhtml_legend=1 00:26:17.309 --rc geninfo_all_blocks=1 00:26:17.309 --rc geninfo_unexecuted_blocks=1 00:26:17.309 00:26:17.309 ' 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:17.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.309 --rc genhtml_branch_coverage=1 00:26:17.309 --rc genhtml_function_coverage=1 00:26:17.309 --rc genhtml_legend=1 00:26:17.309 --rc geninfo_all_blocks=1 00:26:17.309 --rc geninfo_unexecuted_blocks=1 00:26:17.309 00:26:17.309 ' 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:17.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.309 --rc genhtml_branch_coverage=1 00:26:17.309 --rc genhtml_function_coverage=1 00:26:17.309 --rc genhtml_legend=1 00:26:17.309 --rc geninfo_all_blocks=1 00:26:17.309 --rc geninfo_unexecuted_blocks=1 00:26:17.309 00:26:17.309 ' 00:26:17.309 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:17.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.309 --rc genhtml_branch_coverage=1 00:26:17.310 --rc genhtml_function_coverage=1 00:26:17.310 --rc genhtml_legend=1 00:26:17.310 --rc geninfo_all_blocks=1 00:26:17.310 --rc geninfo_unexecuted_blocks=1 00:26:17.310 00:26:17.310 ' 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:17.310 19:24:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.219 19:24:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.219 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:19.220 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.220 19:24:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:19.220 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.220 19:24:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:19.220 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:19.220 Found net devices under 0000:0a:00.1: cvl_0_1 
00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:19.220 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:19.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:19.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:26:19.479 00:26:19.479 --- 10.0.0.2 ping statistics --- 00:26:19.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.479 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:26:19.479 00:26:19.479 --- 10.0.0.1 ping statistics --- 00:26:19.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.479 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:19.479 
19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:19.479 19:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:20.860 Waiting for block devices as requested 00:26:20.860 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:20.860 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:20.860 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:21.121 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:21.121 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:21.121 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:21.121 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:21.121 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:21.380 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:21.380 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:21.380 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:21.380 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:21.640 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:21.640 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:21.640 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:26:21.640 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:21.900 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:21.900 No valid GPT data, bailing 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:21.900 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:26:22.159 00:26:22.159 Discovery Log Number of Records 2, Generation counter 2 00:26:22.159 =====Discovery Log Entry 0====== 00:26:22.159 trtype: tcp 00:26:22.159 adrfam: ipv4 00:26:22.159 subtype: current discovery subsystem 
00:26:22.159 treq: not specified, sq flow control disable supported 00:26:22.159 portid: 1 00:26:22.159 trsvcid: 4420 00:26:22.159 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:22.159 traddr: 10.0.0.1 00:26:22.159 eflags: none 00:26:22.159 sectype: none 00:26:22.159 =====Discovery Log Entry 1====== 00:26:22.159 trtype: tcp 00:26:22.159 adrfam: ipv4 00:26:22.159 subtype: nvme subsystem 00:26:22.159 treq: not specified, sq flow control disable supported 00:26:22.159 portid: 1 00:26:22.159 trsvcid: 4420 00:26:22.159 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:22.159 traddr: 10.0.0.1 00:26:22.159 eflags: none 00:26:22.159 sectype: none 00:26:22.159 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:22.159 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:22.159 ===================================================== 00:26:22.159 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:22.159 ===================================================== 00:26:22.159 Controller Capabilities/Features 00:26:22.159 ================================ 00:26:22.159 Vendor ID: 0000 00:26:22.159 Subsystem Vendor ID: 0000 00:26:22.159 Serial Number: 19937e143d7dc14e279c 00:26:22.159 Model Number: Linux 00:26:22.159 Firmware Version: 6.8.9-20 00:26:22.159 Recommended Arb Burst: 0 00:26:22.159 IEEE OUI Identifier: 00 00 00 00:26:22.159 Multi-path I/O 00:26:22.159 May have multiple subsystem ports: No 00:26:22.159 May have multiple controllers: No 00:26:22.159 Associated with SR-IOV VF: No 00:26:22.159 Max Data Transfer Size: Unlimited 00:26:22.159 Max Number of Namespaces: 0 00:26:22.159 Max Number of I/O Queues: 1024 00:26:22.159 NVMe Specification Version (VS): 1.3 00:26:22.159 NVMe Specification Version (Identify): 1.3 00:26:22.159 Maximum Queue Entries: 1024 
00:26:22.159 Contiguous Queues Required: No 00:26:22.159 Arbitration Mechanisms Supported 00:26:22.159 Weighted Round Robin: Not Supported 00:26:22.159 Vendor Specific: Not Supported 00:26:22.159 Reset Timeout: 7500 ms 00:26:22.159 Doorbell Stride: 4 bytes 00:26:22.159 NVM Subsystem Reset: Not Supported 00:26:22.159 Command Sets Supported 00:26:22.159 NVM Command Set: Supported 00:26:22.159 Boot Partition: Not Supported 00:26:22.159 Memory Page Size Minimum: 4096 bytes 00:26:22.159 Memory Page Size Maximum: 4096 bytes 00:26:22.159 Persistent Memory Region: Not Supported 00:26:22.159 Optional Asynchronous Events Supported 00:26:22.159 Namespace Attribute Notices: Not Supported 00:26:22.159 Firmware Activation Notices: Not Supported 00:26:22.159 ANA Change Notices: Not Supported 00:26:22.159 PLE Aggregate Log Change Notices: Not Supported 00:26:22.159 LBA Status Info Alert Notices: Not Supported 00:26:22.159 EGE Aggregate Log Change Notices: Not Supported 00:26:22.159 Normal NVM Subsystem Shutdown event: Not Supported 00:26:22.159 Zone Descriptor Change Notices: Not Supported 00:26:22.159 Discovery Log Change Notices: Supported 00:26:22.159 Controller Attributes 00:26:22.159 128-bit Host Identifier: Not Supported 00:26:22.159 Non-Operational Permissive Mode: Not Supported 00:26:22.159 NVM Sets: Not Supported 00:26:22.159 Read Recovery Levels: Not Supported 00:26:22.159 Endurance Groups: Not Supported 00:26:22.159 Predictable Latency Mode: Not Supported 00:26:22.159 Traffic Based Keep ALive: Not Supported 00:26:22.159 Namespace Granularity: Not Supported 00:26:22.159 SQ Associations: Not Supported 00:26:22.159 UUID List: Not Supported 00:26:22.159 Multi-Domain Subsystem: Not Supported 00:26:22.159 Fixed Capacity Management: Not Supported 00:26:22.159 Variable Capacity Management: Not Supported 00:26:22.159 Delete Endurance Group: Not Supported 00:26:22.159 Delete NVM Set: Not Supported 00:26:22.159 Extended LBA Formats Supported: Not Supported 00:26:22.159 Flexible 
Data Placement Supported: Not Supported 00:26:22.159 00:26:22.159 Controller Memory Buffer Support 00:26:22.159 ================================ 00:26:22.159 Supported: No 00:26:22.159 00:26:22.159 Persistent Memory Region Support 00:26:22.159 ================================ 00:26:22.159 Supported: No 00:26:22.159 00:26:22.159 Admin Command Set Attributes 00:26:22.159 ============================ 00:26:22.159 Security Send/Receive: Not Supported 00:26:22.159 Format NVM: Not Supported 00:26:22.160 Firmware Activate/Download: Not Supported 00:26:22.160 Namespace Management: Not Supported 00:26:22.160 Device Self-Test: Not Supported 00:26:22.160 Directives: Not Supported 00:26:22.160 NVMe-MI: Not Supported 00:26:22.160 Virtualization Management: Not Supported 00:26:22.160 Doorbell Buffer Config: Not Supported 00:26:22.160 Get LBA Status Capability: Not Supported 00:26:22.160 Command & Feature Lockdown Capability: Not Supported 00:26:22.160 Abort Command Limit: 1 00:26:22.160 Async Event Request Limit: 1 00:26:22.160 Number of Firmware Slots: N/A 00:26:22.160 Firmware Slot 1 Read-Only: N/A 00:26:22.160 Firmware Activation Without Reset: N/A 00:26:22.160 Multiple Update Detection Support: N/A 00:26:22.160 Firmware Update Granularity: No Information Provided 00:26:22.160 Per-Namespace SMART Log: No 00:26:22.160 Asymmetric Namespace Access Log Page: Not Supported 00:26:22.160 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:22.160 Command Effects Log Page: Not Supported 00:26:22.160 Get Log Page Extended Data: Supported 00:26:22.160 Telemetry Log Pages: Not Supported 00:26:22.160 Persistent Event Log Pages: Not Supported 00:26:22.160 Supported Log Pages Log Page: May Support 00:26:22.160 Commands Supported & Effects Log Page: Not Supported 00:26:22.160 Feature Identifiers & Effects Log Page:May Support 00:26:22.160 NVMe-MI Commands & Effects Log Page: May Support 00:26:22.160 Data Area 4 for Telemetry Log: Not Supported 00:26:22.160 Error Log Page Entries 
Supported: 1 00:26:22.160 Keep Alive: Not Supported 00:26:22.160 00:26:22.160 NVM Command Set Attributes 00:26:22.160 ========================== 00:26:22.160 Submission Queue Entry Size 00:26:22.160 Max: 1 00:26:22.160 Min: 1 00:26:22.160 Completion Queue Entry Size 00:26:22.160 Max: 1 00:26:22.160 Min: 1 00:26:22.160 Number of Namespaces: 0 00:26:22.160 Compare Command: Not Supported 00:26:22.160 Write Uncorrectable Command: Not Supported 00:26:22.160 Dataset Management Command: Not Supported 00:26:22.160 Write Zeroes Command: Not Supported 00:26:22.160 Set Features Save Field: Not Supported 00:26:22.160 Reservations: Not Supported 00:26:22.160 Timestamp: Not Supported 00:26:22.160 Copy: Not Supported 00:26:22.160 Volatile Write Cache: Not Present 00:26:22.160 Atomic Write Unit (Normal): 1 00:26:22.160 Atomic Write Unit (PFail): 1 00:26:22.160 Atomic Compare & Write Unit: 1 00:26:22.160 Fused Compare & Write: Not Supported 00:26:22.160 Scatter-Gather List 00:26:22.160 SGL Command Set: Supported 00:26:22.160 SGL Keyed: Not Supported 00:26:22.160 SGL Bit Bucket Descriptor: Not Supported 00:26:22.160 SGL Metadata Pointer: Not Supported 00:26:22.160 Oversized SGL: Not Supported 00:26:22.160 SGL Metadata Address: Not Supported 00:26:22.160 SGL Offset: Supported 00:26:22.160 Transport SGL Data Block: Not Supported 00:26:22.160 Replay Protected Memory Block: Not Supported 00:26:22.160 00:26:22.160 Firmware Slot Information 00:26:22.160 ========================= 00:26:22.160 Active slot: 0 00:26:22.160 00:26:22.160 00:26:22.160 Error Log 00:26:22.160 ========= 00:26:22.160 00:26:22.160 Active Namespaces 00:26:22.160 ================= 00:26:22.160 Discovery Log Page 00:26:22.160 ================== 00:26:22.160 Generation Counter: 2 00:26:22.160 Number of Records: 2 00:26:22.160 Record Format: 0 00:26:22.160 00:26:22.160 Discovery Log Entry 0 00:26:22.160 ---------------------- 00:26:22.160 Transport Type: 3 (TCP) 00:26:22.160 Address Family: 1 (IPv4) 00:26:22.160 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:22.160 Entry Flags: 00:26:22.160 Duplicate Returned Information: 0 00:26:22.160 Explicit Persistent Connection Support for Discovery: 0 00:26:22.160 Transport Requirements: 00:26:22.160 Secure Channel: Not Specified 00:26:22.160 Port ID: 1 (0x0001) 00:26:22.160 Controller ID: 65535 (0xffff) 00:26:22.160 Admin Max SQ Size: 32 00:26:22.160 Transport Service Identifier: 4420 00:26:22.160 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:22.160 Transport Address: 10.0.0.1 00:26:22.160 Discovery Log Entry 1 00:26:22.160 ---------------------- 00:26:22.160 Transport Type: 3 (TCP) 00:26:22.160 Address Family: 1 (IPv4) 00:26:22.160 Subsystem Type: 2 (NVM Subsystem) 00:26:22.160 Entry Flags: 00:26:22.160 Duplicate Returned Information: 0 00:26:22.160 Explicit Persistent Connection Support for Discovery: 0 00:26:22.160 Transport Requirements: 00:26:22.160 Secure Channel: Not Specified 00:26:22.160 Port ID: 1 (0x0001) 00:26:22.160 Controller ID: 65535 (0xffff) 00:26:22.160 Admin Max SQ Size: 32 00:26:22.160 Transport Service Identifier: 4420 00:26:22.160 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:22.160 Transport Address: 10.0.0.1 00:26:22.160 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:22.420 get_feature(0x01) failed 00:26:22.420 get_feature(0x02) failed 00:26:22.420 get_feature(0x04) failed 00:26:22.420 ===================================================== 00:26:22.420 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:22.420 ===================================================== 00:26:22.420 Controller Capabilities/Features 00:26:22.420 ================================ 00:26:22.420 Vendor ID: 0000 00:26:22.420 Subsystem Vendor ID: 
0000 00:26:22.420 Serial Number: 594c69083de823f8bda7 00:26:22.420 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:22.420 Firmware Version: 6.8.9-20 00:26:22.420 Recommended Arb Burst: 6 00:26:22.420 IEEE OUI Identifier: 00 00 00 00:26:22.420 Multi-path I/O 00:26:22.420 May have multiple subsystem ports: Yes 00:26:22.420 May have multiple controllers: Yes 00:26:22.420 Associated with SR-IOV VF: No 00:26:22.420 Max Data Transfer Size: Unlimited 00:26:22.420 Max Number of Namespaces: 1024 00:26:22.420 Max Number of I/O Queues: 128 00:26:22.420 NVMe Specification Version (VS): 1.3 00:26:22.420 NVMe Specification Version (Identify): 1.3 00:26:22.420 Maximum Queue Entries: 1024 00:26:22.420 Contiguous Queues Required: No 00:26:22.420 Arbitration Mechanisms Supported 00:26:22.420 Weighted Round Robin: Not Supported 00:26:22.420 Vendor Specific: Not Supported 00:26:22.420 Reset Timeout: 7500 ms 00:26:22.420 Doorbell Stride: 4 bytes 00:26:22.420 NVM Subsystem Reset: Not Supported 00:26:22.420 Command Sets Supported 00:26:22.420 NVM Command Set: Supported 00:26:22.420 Boot Partition: Not Supported 00:26:22.420 Memory Page Size Minimum: 4096 bytes 00:26:22.420 Memory Page Size Maximum: 4096 bytes 00:26:22.420 Persistent Memory Region: Not Supported 00:26:22.420 Optional Asynchronous Events Supported 00:26:22.420 Namespace Attribute Notices: Supported 00:26:22.420 Firmware Activation Notices: Not Supported 00:26:22.420 ANA Change Notices: Supported 00:26:22.420 PLE Aggregate Log Change Notices: Not Supported 00:26:22.420 LBA Status Info Alert Notices: Not Supported 00:26:22.420 EGE Aggregate Log Change Notices: Not Supported 00:26:22.420 Normal NVM Subsystem Shutdown event: Not Supported 00:26:22.420 Zone Descriptor Change Notices: Not Supported 00:26:22.420 Discovery Log Change Notices: Not Supported 00:26:22.420 Controller Attributes 00:26:22.420 128-bit Host Identifier: Supported 00:26:22.420 Non-Operational Permissive Mode: Not Supported 00:26:22.420 NVM Sets: Not 
Supported 00:26:22.420 Read Recovery Levels: Not Supported 00:26:22.420 Endurance Groups: Not Supported 00:26:22.420 Predictable Latency Mode: Not Supported 00:26:22.420 Traffic Based Keep ALive: Supported 00:26:22.420 Namespace Granularity: Not Supported 00:26:22.420 SQ Associations: Not Supported 00:26:22.420 UUID List: Not Supported 00:26:22.420 Multi-Domain Subsystem: Not Supported 00:26:22.420 Fixed Capacity Management: Not Supported 00:26:22.420 Variable Capacity Management: Not Supported 00:26:22.420 Delete Endurance Group: Not Supported 00:26:22.420 Delete NVM Set: Not Supported 00:26:22.420 Extended LBA Formats Supported: Not Supported 00:26:22.420 Flexible Data Placement Supported: Not Supported 00:26:22.420 00:26:22.420 Controller Memory Buffer Support 00:26:22.420 ================================ 00:26:22.420 Supported: No 00:26:22.420 00:26:22.420 Persistent Memory Region Support 00:26:22.420 ================================ 00:26:22.420 Supported: No 00:26:22.420 00:26:22.420 Admin Command Set Attributes 00:26:22.420 ============================ 00:26:22.420 Security Send/Receive: Not Supported 00:26:22.420 Format NVM: Not Supported 00:26:22.421 Firmware Activate/Download: Not Supported 00:26:22.421 Namespace Management: Not Supported 00:26:22.421 Device Self-Test: Not Supported 00:26:22.421 Directives: Not Supported 00:26:22.421 NVMe-MI: Not Supported 00:26:22.421 Virtualization Management: Not Supported 00:26:22.421 Doorbell Buffer Config: Not Supported 00:26:22.421 Get LBA Status Capability: Not Supported 00:26:22.421 Command & Feature Lockdown Capability: Not Supported 00:26:22.421 Abort Command Limit: 4 00:26:22.421 Async Event Request Limit: 4 00:26:22.421 Number of Firmware Slots: N/A 00:26:22.421 Firmware Slot 1 Read-Only: N/A 00:26:22.421 Firmware Activation Without Reset: N/A 00:26:22.421 Multiple Update Detection Support: N/A 00:26:22.421 Firmware Update Granularity: No Information Provided 00:26:22.421 Per-Namespace SMART Log: Yes 
00:26:22.421 Asymmetric Namespace Access Log Page: Supported 00:26:22.421 ANA Transition Time : 10 sec 00:26:22.421 00:26:22.421 Asymmetric Namespace Access Capabilities 00:26:22.421 ANA Optimized State : Supported 00:26:22.421 ANA Non-Optimized State : Supported 00:26:22.421 ANA Inaccessible State : Supported 00:26:22.421 ANA Persistent Loss State : Supported 00:26:22.421 ANA Change State : Supported 00:26:22.421 ANAGRPID is not changed : No 00:26:22.421 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:22.421 00:26:22.421 ANA Group Identifier Maximum : 128 00:26:22.421 Number of ANA Group Identifiers : 128 00:26:22.421 Max Number of Allowed Namespaces : 1024 00:26:22.421 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:22.421 Command Effects Log Page: Supported 00:26:22.421 Get Log Page Extended Data: Supported 00:26:22.421 Telemetry Log Pages: Not Supported 00:26:22.421 Persistent Event Log Pages: Not Supported 00:26:22.421 Supported Log Pages Log Page: May Support 00:26:22.421 Commands Supported & Effects Log Page: Not Supported 00:26:22.421 Feature Identifiers & Effects Log Page:May Support 00:26:22.421 NVMe-MI Commands & Effects Log Page: May Support 00:26:22.421 Data Area 4 for Telemetry Log: Not Supported 00:26:22.421 Error Log Page Entries Supported: 128 00:26:22.421 Keep Alive: Supported 00:26:22.421 Keep Alive Granularity: 1000 ms 00:26:22.421 00:26:22.421 NVM Command Set Attributes 00:26:22.421 ========================== 00:26:22.421 Submission Queue Entry Size 00:26:22.421 Max: 64 00:26:22.421 Min: 64 00:26:22.421 Completion Queue Entry Size 00:26:22.421 Max: 16 00:26:22.421 Min: 16 00:26:22.421 Number of Namespaces: 1024 00:26:22.421 Compare Command: Not Supported 00:26:22.421 Write Uncorrectable Command: Not Supported 00:26:22.421 Dataset Management Command: Supported 00:26:22.421 Write Zeroes Command: Supported 00:26:22.421 Set Features Save Field: Not Supported 00:26:22.421 Reservations: Not Supported 00:26:22.421 Timestamp: Not Supported 
00:26:22.421 Copy: Not Supported 00:26:22.421 Volatile Write Cache: Present 00:26:22.421 Atomic Write Unit (Normal): 1 00:26:22.421 Atomic Write Unit (PFail): 1 00:26:22.421 Atomic Compare & Write Unit: 1 00:26:22.421 Fused Compare & Write: Not Supported 00:26:22.421 Scatter-Gather List 00:26:22.421 SGL Command Set: Supported 00:26:22.421 SGL Keyed: Not Supported 00:26:22.421 SGL Bit Bucket Descriptor: Not Supported 00:26:22.421 SGL Metadata Pointer: Not Supported 00:26:22.421 Oversized SGL: Not Supported 00:26:22.421 SGL Metadata Address: Not Supported 00:26:22.421 SGL Offset: Supported 00:26:22.421 Transport SGL Data Block: Not Supported 00:26:22.421 Replay Protected Memory Block: Not Supported 00:26:22.421 00:26:22.421 Firmware Slot Information 00:26:22.421 ========================= 00:26:22.421 Active slot: 0 00:26:22.421 00:26:22.421 Asymmetric Namespace Access 00:26:22.421 =========================== 00:26:22.421 Change Count : 0 00:26:22.421 Number of ANA Group Descriptors : 1 00:26:22.421 ANA Group Descriptor : 0 00:26:22.421 ANA Group ID : 1 00:26:22.421 Number of NSID Values : 1 00:26:22.421 Change Count : 0 00:26:22.421 ANA State : 1 00:26:22.421 Namespace Identifier : 1 00:26:22.421 00:26:22.421 Commands Supported and Effects 00:26:22.421 ============================== 00:26:22.421 Admin Commands 00:26:22.421 -------------- 00:26:22.421 Get Log Page (02h): Supported 00:26:22.421 Identify (06h): Supported 00:26:22.421 Abort (08h): Supported 00:26:22.421 Set Features (09h): Supported 00:26:22.421 Get Features (0Ah): Supported 00:26:22.421 Asynchronous Event Request (0Ch): Supported 00:26:22.421 Keep Alive (18h): Supported 00:26:22.421 I/O Commands 00:26:22.421 ------------ 00:26:22.421 Flush (00h): Supported 00:26:22.421 Write (01h): Supported LBA-Change 00:26:22.421 Read (02h): Supported 00:26:22.421 Write Zeroes (08h): Supported LBA-Change 00:26:22.421 Dataset Management (09h): Supported 00:26:22.421 00:26:22.421 Error Log 00:26:22.421 ========= 
00:26:22.421 Entry: 0 00:26:22.421 Error Count: 0x3 00:26:22.421 Submission Queue Id: 0x0 00:26:22.421 Command Id: 0x5 00:26:22.421 Phase Bit: 0 00:26:22.421 Status Code: 0x2 00:26:22.421 Status Code Type: 0x0 00:26:22.421 Do Not Retry: 1 00:26:22.421 Error Location: 0x28 00:26:22.421 LBA: 0x0 00:26:22.421 Namespace: 0x0 00:26:22.421 Vendor Log Page: 0x0 00:26:22.421 ----------- 00:26:22.421 Entry: 1 00:26:22.421 Error Count: 0x2 00:26:22.421 Submission Queue Id: 0x0 00:26:22.421 Command Id: 0x5 00:26:22.421 Phase Bit: 0 00:26:22.421 Status Code: 0x2 00:26:22.421 Status Code Type: 0x0 00:26:22.421 Do Not Retry: 1 00:26:22.421 Error Location: 0x28 00:26:22.421 LBA: 0x0 00:26:22.421 Namespace: 0x0 00:26:22.421 Vendor Log Page: 0x0 00:26:22.421 ----------- 00:26:22.421 Entry: 2 00:26:22.421 Error Count: 0x1 00:26:22.421 Submission Queue Id: 0x0 00:26:22.421 Command Id: 0x4 00:26:22.421 Phase Bit: 0 00:26:22.421 Status Code: 0x2 00:26:22.421 Status Code Type: 0x0 00:26:22.421 Do Not Retry: 1 00:26:22.421 Error Location: 0x28 00:26:22.421 LBA: 0x0 00:26:22.421 Namespace: 0x0 00:26:22.421 Vendor Log Page: 0x0 00:26:22.421 00:26:22.421 Number of Queues 00:26:22.421 ================ 00:26:22.421 Number of I/O Submission Queues: 128 00:26:22.421 Number of I/O Completion Queues: 128 00:26:22.421 00:26:22.421 ZNS Specific Controller Data 00:26:22.421 ============================ 00:26:22.421 Zone Append Size Limit: 0 00:26:22.421 00:26:22.421 00:26:22.421 Active Namespaces 00:26:22.421 ================= 00:26:22.421 get_feature(0x05) failed 00:26:22.421 Namespace ID:1 00:26:22.421 Command Set Identifier: NVM (00h) 00:26:22.421 Deallocate: Supported 00:26:22.421 Deallocated/Unwritten Error: Not Supported 00:26:22.421 Deallocated Read Value: Unknown 00:26:22.421 Deallocate in Write Zeroes: Not Supported 00:26:22.421 Deallocated Guard Field: 0xFFFF 00:26:22.421 Flush: Supported 00:26:22.421 Reservation: Not Supported 00:26:22.421 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:22.421 Size (in LBAs): 1953525168 (931GiB) 00:26:22.421 Capacity (in LBAs): 1953525168 (931GiB) 00:26:22.421 Utilization (in LBAs): 1953525168 (931GiB) 00:26:22.421 UUID: 8b161bce-0b55-4857-9cd8-d73fdd45ce26 00:26:22.421 Thin Provisioning: Not Supported 00:26:22.421 Per-NS Atomic Units: Yes 00:26:22.421 Atomic Boundary Size (Normal): 0 00:26:22.421 Atomic Boundary Size (PFail): 0 00:26:22.421 Atomic Boundary Offset: 0 00:26:22.421 NGUID/EUI64 Never Reused: No 00:26:22.421 ANA group ID: 1 00:26:22.421 Namespace Write Protected: No 00:26:22.421 Number of LBA Formats: 1 00:26:22.421 Current LBA Format: LBA Format #00 00:26:22.421 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:22.421 00:26:22.421 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:22.421 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:22.421 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:22.421 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:22.421 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:22.421 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:22.421 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:22.421 rmmod nvme_tcp 00:26:22.421 rmmod nvme_fabrics 00:26:22.421 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:22.421 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.422 19:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.329 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:24.329 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:24.329 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:24.329 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:24.588 19:24:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:24.588 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:24.588 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:24.588 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:24.588 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:24.588 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:24.588 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:25.525 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:25.525 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:25.525 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:25.525 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:25.525 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:25.525 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:25.786 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:25.786 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:25.786 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:25.786 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:25.786 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:25.786 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:25.786 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:25.786 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:25.786 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:25.786 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:26:26.726 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:26.726 00:26:26.726 real 0m9.809s 00:26:26.726 user 0m2.192s 00:26:26.726 sys 0m3.620s 00:26:26.726 19:24:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.726 19:24:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.726 ************************************ 00:26:26.726 END TEST nvmf_identify_kernel_target 00:26:26.726 ************************************ 00:26:26.726 19:24:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:26.726 19:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:26.726 19:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.726 19:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.726 ************************************ 00:26:26.726 START TEST nvmf_auth_host 00:26:26.726 ************************************ 00:26:26.726 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:26.985 * Looking for test storage... 
00:26:26.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:26.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.985 --rc genhtml_branch_coverage=1 00:26:26.985 --rc genhtml_function_coverage=1 00:26:26.985 --rc genhtml_legend=1 00:26:26.985 --rc geninfo_all_blocks=1 00:26:26.985 --rc geninfo_unexecuted_blocks=1 00:26:26.985 00:26:26.985 ' 00:26:26.985 19:24:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:26.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.985 --rc genhtml_branch_coverage=1 00:26:26.985 --rc genhtml_function_coverage=1 00:26:26.985 --rc genhtml_legend=1 00:26:26.985 --rc geninfo_all_blocks=1 00:26:26.985 --rc geninfo_unexecuted_blocks=1 00:26:26.985 00:26:26.985 ' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:26.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.985 --rc genhtml_branch_coverage=1 00:26:26.985 --rc genhtml_function_coverage=1 00:26:26.985 --rc genhtml_legend=1 00:26:26.985 --rc geninfo_all_blocks=1 00:26:26.985 --rc geninfo_unexecuted_blocks=1 00:26:26.985 00:26:26.985 ' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:26.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.985 --rc genhtml_branch_coverage=1 00:26:26.985 --rc genhtml_function_coverage=1 00:26:26.985 --rc genhtml_legend=1 00:26:26.985 --rc geninfo_all_blocks=1 00:26:26.985 --rc geninfo_unexecuted_blocks=1 00:26:26.985 00:26:26.985 ' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.985 19:24:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:26.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:26.985 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:26.986 19:24:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:26.986 19:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:29.519 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:29.519 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:29.519 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:29.519 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:29.519 19:24:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.519 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.520 19:24:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:26:29.520 00:26:29.520 --- 10.0.0.2 ping statistics --- 00:26:29.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.520 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:29.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:26:29.520 00:26:29.520 --- 10.0.0.1 ping statistics --- 00:26:29.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.520 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1212347 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:29.520 19:24:39 
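The `nvmf_tcp_init` sequence traced above — flush addresses, create a network namespace, move the target interface into it, assign 10.0.0.1/24 (initiator) and 10.0.0.2/24 (target), open TCP port 4420, and verify with `ping` in both directions — can be condensed into one helper. This is a hedged reconstruction, not the exact `nvmf/common.sh` code; it must run as root on a host that actually has the two interfaces, and the interface/namespace names default to the ones in this log (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`). Only the function definition appears here; nothing is executed.

```shell
# Hedged sketch of nvmf_tcp_init from nvmf/common.sh; requires root.
setup_tcp_test_net() {
    local target_if=${1:-cvl_0_0} initiator_if=${2:-cvl_0_1} ns=${3:-cvl_0_0_ns_spdk}
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    # The target side lives in its own namespace so the host (initiator)
    # and target network stacks are isolated from each other.
    ip link set "$target_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP discovery/IO port toward the initiator interface.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions, as the trace does.
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Because the target app is later launched with `ip netns exec cvl_0_0_ns_spdk …` (the `NVMF_TARGET_NS_CMD` prefix visible above), it listens on 10.0.0.2 inside the namespace while the initiator connects from 10.0.0.1 outside it.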
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1212347 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1212347 ']' 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:29.520 19:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=da9307fd77587a2ee56f7ecac6196b2b 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.J6o 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key da9307fd77587a2ee56f7ecac6196b2b 0 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 da9307fd77587a2ee56f7ecac6196b2b 0 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=da9307fd77587a2ee56f7ecac6196b2b 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.J6o 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.J6o 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.J6o 00:26:29.520 19:24:40 
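Each `gen_dhchap_key <digest> <len>` call in the trace draws `len/2` random bytes with `xxd`, then the small inline `python -` step wraps them in the DHHC-1 secret representation before the file is `chmod 0600`-ed. The sketch below is my reading of that flow rather than a copy of `nvmf/common.sh`: the encoding (base64 of the key bytes followed by a little-endian CRC32, prefixed with a two-digit hash id, `00`=none through `03`=sha512) follows the standard DHHC-1 secret format.

```shell
# Hedged reconstruction of gen_dhchap_key/format_dhchap_key from nvmf/common.sh.
digest=0   # 0 = no hash ("null"), 1 = sha256, 2 = sha384, 3 = sha512
len=32     # hex digits requested, i.e. len/2 random bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
# DHHC-1 secrets carry the key material plus a little-endian CRC32 trailer.
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"
cat "$file"
```

For a 32-hex-digit "null" key this yields a line like `DHHC-1:00:<28 base64 chars>:`, matching the shape of the keys the test feeds to `keyring_file_add_key` below.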
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b9b586f950f7290e61719d0f598dcd2581e4adb4b3df40b1bc474aef8509d7f1 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.BzK 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b9b586f950f7290e61719d0f598dcd2581e4adb4b3df40b1bc474aef8509d7f1 3 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b9b586f950f7290e61719d0f598dcd2581e4adb4b3df40b1bc474aef8509d7f1 3 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b9b586f950f7290e61719d0f598dcd2581e4adb4b3df40b1bc474aef8509d7f1 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:29.520 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.BzK 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.BzK 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.BzK 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=90397f932c7492c65705bbe7c1a86c21bb783d6ec04adad6 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CP3 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 90397f932c7492c65705bbe7c1a86c21bb783d6ec04adad6 0 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 90397f932c7492c65705bbe7c1a86c21bb783d6ec04adad6 0 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.778 19:24:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=90397f932c7492c65705bbe7c1a86c21bb783d6ec04adad6 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CP3 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CP3 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.CP3 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d11162d2c156066fdd89a6a2f116ca1e292586fe95cb3049 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AvP 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d11162d2c156066fdd89a6a2f116ca1e292586fe95cb3049 2 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 d11162d2c156066fdd89a6a2f116ca1e292586fe95cb3049 2 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d11162d2c156066fdd89a6a2f116ca1e292586fe95cb3049 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AvP 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AvP 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.AvP 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=efda422f3f7020800c9edcae6173e10e 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.po2 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key efda422f3f7020800c9edcae6173e10e 1 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 efda422f3f7020800c9edcae6173e10e 1 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=efda422f3f7020800c9edcae6173e10e 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:29.778 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.po2 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.po2 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.po2 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=56c9663179692ba29c8c6eac647fdec6 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qkZ 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 56c9663179692ba29c8c6eac647fdec6 1 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 56c9663179692ba29c8c6eac647fdec6 1 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=56c9663179692ba29c8c6eac647fdec6 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qkZ 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qkZ 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qkZ 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:29.779 19:24:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c91c4aae233b542ea4f7e816d5beb2b1dc198d6042014d4a 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.EMY 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c91c4aae233b542ea4f7e816d5beb2b1dc198d6042014d4a 2 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c91c4aae233b542ea4f7e816d5beb2b1dc198d6042014d4a 2 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c91c4aae233b542ea4f7e816d5beb2b1dc198d6042014d4a 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:29.779 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.EMY 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.EMY 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.EMY 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5221c1d9442fbe2fe78462ba9746890a 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rEa 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5221c1d9442fbe2fe78462ba9746890a 0 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5221c1d9442fbe2fe78462ba9746890a 0 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5221c1d9442fbe2fe78462ba9746890a 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rEa 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rEa 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.rEa 00:26:30.036 19:24:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4108aa24f7bcd7fa5ec8e69ed85134c6a1f942ec94d26d017eb4ca0a850480c3 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3nG 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4108aa24f7bcd7fa5ec8e69ed85134c6a1f942ec94d26d017eb4ca0a850480c3 3 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4108aa24f7bcd7fa5ec8e69ed85134c6a1f942ec94d26d017eb4ca0a850480c3 3 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4108aa24f7bcd7fa5ec8e69ed85134c6a1f942ec94d26d017eb4ca0a850480c3 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3nG 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3nG 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.3nG 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1212347 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1212347 ']' 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:30.036 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.J6o 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.BzK ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BzK 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.CP3 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.AvP ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AvP 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.po2 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qkZ ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qkZ 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.EMY 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rEa ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rEa 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.3nG 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.294 19:24:40 
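The registration loop traced above walks the `keys[]` array and, for every generated key file, calls `keyring_file_add_key keyN <file>` over the RPC socket, adding the matching controller key under `ckeyN` whenever one exists (note `ckeys[4]` is empty, so `key4` gets no counterpart). A condensed sketch of that loop follows, with `rpc.py` stubbed out by an `echo` so the loop's shape is visible without a running target; the two key-file pairs shown are taken from this run.

```shell
# Sketch of the host/auth.sh key-registration loop; rpc() stands in for
# scripts/rpc.py -s /var/tmp/spdk.sock in the real script.
rpc() { echo "rpc.py $*"; }

keys=(/tmp/spdk.key-null.J6o /tmp/spdk.key-null.CP3)
ckeys=(/tmp/spdk.key-sha512.BzK /tmp/spdk.key-sha384.AvP)

out=$(
    for i in "${!keys[@]}"; do
        rpc keyring_file_add_key "key$i" "${keys[$i]}"
        # Controller (bidirectional auth) keys are optional per slot.
        if [ -n "${ckeys[$i]:-}" ]; then
            rpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done
)
echo "$out"
```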
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:30.294 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:30.551 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:30.551 19:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:31.483 Waiting for block devices as requested 00:26:31.483 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:31.740 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:31.740 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:31.999 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:31.999 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:31.999 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:31.999 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:32.257 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:32.257 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:32.257 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:32.257 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:32.515 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:32.515 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:32.515 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:32.515 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:32.773 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:32.773 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:33.344 No valid GPT data, bailing 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:26:33.344 00:26:33.344 Discovery Log Number of Records 2, Generation counter 2 00:26:33.344 =====Discovery Log Entry 0====== 00:26:33.344 trtype: tcp 00:26:33.344 adrfam: ipv4 00:26:33.344 subtype: current discovery subsystem 00:26:33.344 treq: not specified, sq flow control disable supported 00:26:33.344 portid: 1 00:26:33.344 trsvcid: 4420 00:26:33.344 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:33.344 traddr: 10.0.0.1 00:26:33.344 eflags: none 00:26:33.344 sectype: none 00:26:33.344 =====Discovery Log Entry 1====== 00:26:33.344 trtype: tcp 00:26:33.344 adrfam: ipv4 00:26:33.344 subtype: nvme subsystem 00:26:33.344 treq: not specified, sq flow control disable supported 00:26:33.344 portid: 1 00:26:33.344 trsvcid: 4420 00:26:33.344 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:33.344 traddr: 10.0.0.1 00:26:33.344 eflags: none 00:26:33.344 sectype: none 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:33.344 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.345 19:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.603 nvme0n1 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:33.603 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.604 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.862 nvme0n1 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.862 19:24:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.862 
19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.862 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.121 nvme0n1 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.121 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.122 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:34.380 nvme0n1 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:34.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.381 nvme0n1
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.381 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=:
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=:
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:34.639 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:34.640 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.640 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.640 nvme0n1
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP:
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=:
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP:
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]]
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=:
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.640 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.914 nvme0n1
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==:
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==:
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==:
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]]
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==:
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.914 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.171 nvme0n1
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.171 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D:
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU:
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D:
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]]
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU:
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.172 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.429 nvme0n1
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==:
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF:
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==:
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]]
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF:
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.429 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.430 19:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.687 nvme0n1
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=:
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=:
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:35.687 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:35.688 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.688 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.944 nvme0n1
00:26:35.944 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.944 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP:
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=:
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP:
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]]
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=:
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- #
ip=NVMF_INITIATOR_IP 00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.945 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.202 nvme0n1 00:26:36.202 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.202 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.202 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.202 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.203 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.203 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.203 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.203 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.203 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.203 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.460 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.461 
19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.461 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.718 nvme0n1 00:26:36.718 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.719 19:24:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.719 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.977 nvme0n1 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.977 19:24:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:36.977 
19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:36.977 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.978 19:24:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.978 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.236 nvme0n1 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.236 19:24:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.236 
19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.236 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.803 nvme0n1 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.803 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.804 19:24:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.804 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.370 nvme0n1 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:38.370 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:38.371 19:24:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.371 19:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.629 nvme0n1 00:26:38.629 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.629 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.629 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.629 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.629 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.887 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.888 19:24:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.888 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.454 nvme0n1 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.454 19:24:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.454 19:24:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.454 19:24:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.454 19:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.019 nvme0n1 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.019 19:24:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.019 19:24:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.019 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.582 nvme0n1 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:40.582 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.583 19:24:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.583 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.516 nvme0n1 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.516 19:24:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.516 19:24:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.516 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.516 19:24:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.449 nvme0n1 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.449 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.450 19:24:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.450 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.090 nvme0n1 00:26:43.090 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.090 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.090 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.090 19:24:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.090 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.090 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.090 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.090 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.090 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.090 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.375 19:24:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.375 19:24:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.375 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.309 nvme0n1 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.309 19:24:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.309 19:24:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.309 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.242 nvme0n1 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.243 19:24:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.243 nvme0n1 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.243 19:24:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.243 19:24:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.243 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.243 19:24:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.502 nvme0n1 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.502 19:24:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.502 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.759 nvme0n1 00:26:45.759 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.759 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.759 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.759 19:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.759 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.759 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.759 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.759 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.759 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.760 19:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.760 19:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.760 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.017 nvme0n1 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.017 19:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.017 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.018 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.018 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.018 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.018 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.018 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.018 19:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.018 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.018 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.018 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.275 nvme0n1 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:46.275 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.276 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.533 nvme0n1 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.533 19:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.533 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.534 19:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.791 nvme0n1 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.791 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.049 nvme0n1 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.049 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:47.050 19:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 
-- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.050 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.308 nvme0n1 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.308 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.566 nvme0n1 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.566 19:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.566 19:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.566 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.566 19:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.824 nvme0n1 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.824 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.825 
19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.825 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.083 nvme0n1 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.083 19:24:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.083 19:24:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.083 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.340 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.340 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:48.340 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.340 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.597 nvme0n1 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.598 19:24:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.598 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.856 nvme0n1 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.856 19:24:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.856 19:24:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.856 
19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.856 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.114 nvme0n1 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.114 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.115 19:24:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.115 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.680 nvme0n1 
00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.680 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:49.681 19:25:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.681 
19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.681 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.246 nvme0n1 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.246 19:25:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.246 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.810 nvme0n1 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.376 nvme0n1 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.376 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:51.943 nvme0n1 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:51.943 19:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.943 19:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.943 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.897 nvme0n1 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:52.897 19:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.897 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.831 nvme0n1 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.831 
19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:53.831 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.832 19:25:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.832 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.765 nvme0n1 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.765 19:25:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.766 19:25:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.695 nvme0n1 00:26:55.695 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.695 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.695 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.695 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.695 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.695 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:55.695 19:25:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.695 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.696 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.644 nvme0n1 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.644 
19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:56.644 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.645 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.645 nvme0n1 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.645 19:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.645 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.902 nvme0n1 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:56.902 19:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.902 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.160 nvme0n1 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.160 19:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.160 19:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.160 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.417 nvme0n1 00:26:57.417 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.417 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.417 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.417 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.417 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.417 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.417 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.417 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.417 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.418 19:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.418 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.675 nvme0n1 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.675 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.676 19:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.676 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.933 nvme0n1 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.933 19:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.933 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.934 
19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.934 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.192 nvme0n1 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 
00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.192 19:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.192 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.450 nvme0n1 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.450 19:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.450 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.709 nvme0n1 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.709 19:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.709 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.967 nvme0n1 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:58.967 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.968 19:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.968 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.226 nvme0n1 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.226 19:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:26:59.226 19:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.226 19:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.226 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.485 nvme0n1 00:26:59.485 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.485 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.485 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.485 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.485 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.485 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.485 19:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:59.485 19:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.485 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.052 nvme0n1 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:00.052 19:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.052 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.310 nvme0n1 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.311 
19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.311 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.569 nvme0n1 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.569 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.570 19:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.570 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.138 nvme0n1 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:27:01.138 19:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.138 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.703 nvme0n1 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
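The repeated `nvmf/common.sh@769-783` trace blocks above are `get_main_ns_ip` resolving which IP the initiator should dial: an associative array maps the transport to the *name* of an environment variable, which is then dereferenced. A minimal runnable sketch of that lookup, with illustrative values (the real values come from the test environment, not this snippet):

```shell
# Illustrative environment; in the real run these are set by the test harness.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    # Each transport resolves to a different variable name (nvmf/common.sh@772-773).
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion: ${!ip} reads the variable whose name is stored in $ip.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}

get_main_ns_ip
```

This is why the trace shows `ip=NVMF_INITIATOR_IP` followed by `echo 10.0.0.1`: the variable name is selected first, then dereferenced.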
00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.703 
19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.703 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.269 nvme0n1 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.269 19:25:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.269 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
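Each `nvme0n1` block in this log is one turn of the same loop (`auth.sh@101-104`): for every dhgroup and every keyid, restrict the host's allowed digest/dhgroup, reconnect with that key, verify the controller appears, and detach. A dry-run sketch of that iteration pattern, with `rpc_cmd` stubbed to echo (a real run invokes SPDK's rpc.py; key names here are placeholders, not the actual DHHC-1 secrets):

```shell
# Stub: print RPCs instead of executing them, so the control flow is visible.
rpc_cmd() { echo "rpc_cmd $*"; }

digest=sha512
dhgroups=(ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)   # placeholders for the DHHC-1 secrets

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Restrict the initiator to the digest/dhgroup under test (auth.sh@60).
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Reconnect, authenticating with this iteration's key (auth.sh@61).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            --dhchap-key "key${keyid}"
        # Verify the controller came up, then tear it down (auth.sh@64-65).
        rpc_cmd bdev_nvme_get_controllers
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done
```

The `--dhchap-ctrlr-key ckeyN` argument seen in some trace entries is added only when a controller key exists for that keyid (the `ckey=(${ckeys[keyid]:+…})` expansion at `auth.sh@58`), which is why the keyid=4 attach above omits it.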
00:27:02.835 nvme0n1 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.836 
19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.836 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.401 nvme0n1 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGE5MzA3ZmQ3NzU4N2EyZWU1NmY3ZWNhYzYxOTZiMmI2Y+CP: 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: ]] 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjliNTg2Zjk1MGY3MjkwZTYxNzE5ZDBmNTk4ZGNkMjU4MWU0YWRiNGIzZGY0MGIxYmM0NzRhZWY4NTA5ZDdmMahdXNA=: 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.402 19:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.402 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.335 nvme0n1 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.335 19:25:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.335 19:25:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.335 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.266 nvme0n1 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.266 19:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.266 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.267 19:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.267 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.197 nvme0n1 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.197 19:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzkxYzRhYWUyMzNiNTQyZWE0ZjdlODE2ZDViZWIyYjFkYzE5OGQ2MDQyMDE0ZDRhFjzcng==: 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: ]] 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTIyMWMxZDk0NDJmYmUyZmU3ODQ2MmJhOTc0Njg5MGEGWvjF: 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:06.197 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.198 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:07.128 nvme0n1 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDEwOGFhMjRmN2JjZDdmYTVlYzhlNjllZDg1MTM0YzZhMWY5NDJlYzk0ZDI2ZDAxN2ViNGNhMGE4NTA0ODBjM4xfETI=: 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.128 
19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.128 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.062 nvme0n1 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:27:08.062 
19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.062 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.063 request: 00:27:08.063 { 00:27:08.063 "name": "nvme0", 00:27:08.063 "trtype": "tcp", 00:27:08.063 "traddr": "10.0.0.1", 00:27:08.063 "adrfam": "ipv4", 00:27:08.063 "trsvcid": "4420", 00:27:08.063 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:08.063 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:08.063 "prchk_reftag": false, 00:27:08.063 "prchk_guard": false, 00:27:08.063 "hdgst": false, 00:27:08.063 "ddgst": false, 00:27:08.063 "allow_unrecognized_csi": false, 00:27:08.063 "method": "bdev_nvme_attach_controller", 00:27:08.063 "req_id": 1 00:27:08.063 } 00:27:08.063 Got JSON-RPC error response 00:27:08.063 response: 00:27:08.063 { 00:27:08.063 "code": -5, 00:27:08.063 "message": "Input/output 
error" 00:27:08.063 } 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.063 request: 00:27:08.063 { 00:27:08.063 "name": "nvme0", 00:27:08.063 "trtype": "tcp", 00:27:08.063 "traddr": "10.0.0.1", 
00:27:08.063 "adrfam": "ipv4", 00:27:08.063 "trsvcid": "4420", 00:27:08.063 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:08.063 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:08.063 "prchk_reftag": false, 00:27:08.063 "prchk_guard": false, 00:27:08.063 "hdgst": false, 00:27:08.063 "ddgst": false, 00:27:08.063 "dhchap_key": "key2", 00:27:08.063 "allow_unrecognized_csi": false, 00:27:08.063 "method": "bdev_nvme_attach_controller", 00:27:08.063 "req_id": 1 00:27:08.063 } 00:27:08.063 Got JSON-RPC error response 00:27:08.063 response: 00:27:08.063 { 00:27:08.063 "code": -5, 00:27:08.063 "message": "Input/output error" 00:27:08.063 } 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.063 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.322 19:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:08.322 19:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.322 request: 00:27:08.322 { 00:27:08.322 "name": "nvme0", 00:27:08.322 "trtype": "tcp", 00:27:08.322 "traddr": "10.0.0.1", 00:27:08.322 "adrfam": "ipv4", 00:27:08.322 "trsvcid": "4420", 00:27:08.322 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:08.322 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:08.322 "prchk_reftag": false, 00:27:08.322 "prchk_guard": false, 00:27:08.322 "hdgst": false, 00:27:08.322 "ddgst": false, 00:27:08.322 "dhchap_key": "key1", 00:27:08.322 "dhchap_ctrlr_key": "ckey2", 00:27:08.322 "allow_unrecognized_csi": false, 00:27:08.322 "method": "bdev_nvme_attach_controller", 00:27:08.322 "req_id": 1 00:27:08.322 } 00:27:08.322 Got JSON-RPC error response 00:27:08.322 response: 00:27:08.322 { 00:27:08.322 "code": -5, 00:27:08.322 "message": "Input/output error" 00:27:08.322 } 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.322 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.323 nvme0n1 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.323 19:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.323 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:08.583 19:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.583 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.583 request: 00:27:08.583 { 00:27:08.583 "name": "nvme0", 00:27:08.583 "dhchap_key": "key1", 00:27:08.583 "dhchap_ctrlr_key": "ckey2", 00:27:08.583 "method": "bdev_nvme_set_keys", 00:27:08.583 "req_id": 1 00:27:08.583 } 00:27:08.583 Got JSON-RPC error response 00:27:08.583 response: 00:27:08.583 { 00:27:08.583 "code": -13, 00:27:08.583 "message": "Permission denied" 00:27:08.583 } 00:27:08.583 
19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:08.583 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:09.958 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.958 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:09.958 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.958 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.958 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.958 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:09.958 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTAzOTdmOTMyYzc0OTJjNjU3MDViYmU3YzFhODZjMjFiYjc4M2Q2ZWMwNGFkYWQ2EV8UiA==: 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: ]] 00:27:10.893 19:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDExMTYyZDJjMTU2MDY2ZmRkODlhNmEyZjExNmNhMWUyOTI1ODZmZTk1Y2IzMDQ5vKowjA==: 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.893 nvme0n1 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.893 19:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZkYTQyMmYzZjcwMjA4MDBjOWVkY2FlNjE3M2UxMGUF9r6D: 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: ]] 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjOTY2MzE3OTY5MmJhMjljOGM2ZWFjNjQ3ZmRlYzbkuCfU: 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:10.893 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.894 
19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.894 request: 00:27:10.894 { 00:27:10.894 "name": "nvme0", 00:27:10.894 "dhchap_key": "key2", 00:27:10.894 "dhchap_ctrlr_key": "ckey1", 00:27:10.894 "method": "bdev_nvme_set_keys", 00:27:10.894 "req_id": 1 00:27:10.894 } 00:27:10.894 Got JSON-RPC error response 00:27:10.894 response: 00:27:10.894 { 00:27:10.894 "code": -13, 00:27:10.894 "message": "Permission denied" 00:27:10.894 } 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.894 19:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:10.894 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:12.270 rmmod nvme_tcp 00:27:12.270 rmmod nvme_fabrics 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1212347 ']' 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1212347 00:27:12.270 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1212347 ']' 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1212347 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1212347 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1212347' 00:27:12.271 killing process with pid 1212347 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1212347 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1212347 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.271 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:14.324 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.698 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:15.698 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:15.698 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:15.698 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:15.698 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:15.698 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:15.698 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:15.698 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:15.698 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:15.698 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:15.698 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:15.698 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:15.698 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:15.698 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:15.698 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:15.698 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:16.635 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:16.893 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.J6o /tmp/spdk.key-null.CP3 /tmp/spdk.key-sha256.po2 /tmp/spdk.key-sha384.EMY /tmp/spdk.key-sha512.3nG 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:16.893 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:18.268 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:18.268 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:18.268 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:18.268 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:18.268 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:18.268 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:18.268 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:18.268 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:18.268 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:18.268 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:18.268 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:18.268 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:18.268 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:18.268 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:18.268 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:18.268 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:18.268 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:18.268 00:27:18.268 real 0m51.386s 00:27:18.268 user 0m49.155s 00:27:18.268 sys 0m6.298s 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.268 ************************************ 00:27:18.268 END TEST nvmf_auth_host 00:27:18.268 ************************************ 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:27:18.268 19:25:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.268 ************************************ 00:27:18.268 START TEST nvmf_digest 00:27:18.268 ************************************ 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:18.268 * Looking for test storage... 00:27:18.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:18.268 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:18.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.527 --rc genhtml_branch_coverage=1 00:27:18.527 --rc genhtml_function_coverage=1 00:27:18.527 --rc genhtml_legend=1 00:27:18.527 --rc geninfo_all_blocks=1 00:27:18.527 --rc geninfo_unexecuted_blocks=1 00:27:18.527 00:27:18.527 ' 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:18.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.527 --rc genhtml_branch_coverage=1 00:27:18.527 --rc genhtml_function_coverage=1 00:27:18.527 --rc genhtml_legend=1 00:27:18.527 --rc geninfo_all_blocks=1 00:27:18.527 --rc geninfo_unexecuted_blocks=1 00:27:18.527 00:27:18.527 ' 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:18.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.527 --rc genhtml_branch_coverage=1 00:27:18.527 --rc genhtml_function_coverage=1 00:27:18.527 --rc genhtml_legend=1 00:27:18.527 --rc geninfo_all_blocks=1 00:27:18.527 --rc geninfo_unexecuted_blocks=1 00:27:18.527 00:27:18.527 ' 00:27:18.527 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:18.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.527 --rc genhtml_branch_coverage=1 00:27:18.528 --rc genhtml_function_coverage=1 00:27:18.528 --rc genhtml_legend=1 00:27:18.528 --rc geninfo_all_blocks=1 00:27:18.528 --rc geninfo_unexecuted_blocks=1 00:27:18.528 00:27:18.528 ' 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:18.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:18.528 19:25:28 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:18.528 19:25:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.063 19:25:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:21.063 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:21.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:21.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:21.063 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.063 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:21.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:27:21.064 00:27:21.064 --- 10.0.0.2 ping statistics --- 00:27:21.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.064 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:27:21.064 00:27:21.064 --- 10.0.0.1 ping statistics --- 00:27:21.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.064 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:21.064 ************************************ 00:27:21.064 START TEST nvmf_digest_clean 00:27:21.064 ************************************ 00:27:21.064 
19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1221961 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1221961 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1221961 ']' 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.064 19:25:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.064 [2024-12-06 19:25:31.405635] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:21.064 [2024-12-06 19:25:31.405772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.064 [2024-12-06 19:25:31.496615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.064 [2024-12-06 19:25:31.552971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.064 [2024-12-06 19:25:31.553044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.064 [2024-12-06 19:25:31.553059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.064 [2024-12-06 19:25:31.553070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.064 [2024-12-06 19:25:31.553079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:21.064 [2024-12-06 19:25:31.553616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:21.064 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.323 null0 00:27:21.323 [2024-12-06 19:25:31.764244] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.323 [2024-12-06 19:25:31.788454] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1222096 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1222096 /var/tmp/bperf.sock 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1222096 ']' 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:21.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.323 19:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.323 [2024-12-06 19:25:31.838184] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:21.323 [2024-12-06 19:25:31.838262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222096 ] 00:27:21.581 [2024-12-06 19:25:31.903825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.581 [2024-12-06 19:25:31.960556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.581 19:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.581 19:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:21.581 19:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:21.581 19:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:21.582 19:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:22.149 19:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.149 19:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.407 nvme0n1 00:27:22.407 19:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:22.407 19:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:22.407 Running I/O for 2 seconds... 00:27:24.712 18890.00 IOPS, 73.79 MiB/s [2024-12-06T18:25:35.289Z] 18972.00 IOPS, 74.11 MiB/s 00:27:24.712 Latency(us) 00:27:24.712 [2024-12-06T18:25:35.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.712 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:24.712 nvme0n1 : 2.01 18987.41 74.17 0.00 0.00 6734.09 3446.71 15728.64 00:27:24.713 [2024-12-06T18:25:35.290Z] =================================================================================================================== 00:27:24.713 [2024-12-06T18:25:35.290Z] Total : 18987.41 74.17 0.00 0.00 6734.09 3446.71 15728.64 00:27:24.713 { 00:27:24.713 "results": [ 00:27:24.713 { 00:27:24.713 "job": "nvme0n1", 00:27:24.713 "core_mask": "0x2", 00:27:24.713 "workload": "randread", 00:27:24.713 "status": "finished", 00:27:24.713 "queue_depth": 128, 00:27:24.713 "io_size": 4096, 00:27:24.713 "runtime": 2.005118, 00:27:24.713 "iops": 18987.41121470158, 00:27:24.713 "mibps": 74.16957505742805, 00:27:24.713 "io_failed": 0, 00:27:24.713 "io_timeout": 0, 00:27:24.713 "avg_latency_us": 6734.090111056634, 00:27:24.713 "min_latency_us": 3446.708148148148, 00:27:24.713 "max_latency_us": 15728.64 00:27:24.713 } 00:27:24.713 ], 00:27:24.713 "core_count": 1 00:27:24.713 } 00:27:24.713 19:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:24.713 19:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:27:24.713 19:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:24.713 19:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:24.713 | select(.opcode=="crc32c") 00:27:24.713 | "\(.module_name) \(.executed)"' 00:27:24.713 19:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1222096 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1222096 ']' 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1222096 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1222096 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1222096' 00:27:24.713 killing process with pid 1222096 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1222096 00:27:24.713 Received shutdown signal, test time was about 2.000000 seconds 00:27:24.713 00:27:24.713 Latency(us) 00:27:24.713 [2024-12-06T18:25:35.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.713 [2024-12-06T18:25:35.290Z] =================================================================================================================== 00:27:24.713 [2024-12-06T18:25:35.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.713 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1222096 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1222516 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1222516 
/var/tmp/bperf.sock 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1222516 ']' 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:24.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:24.970 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:24.970 [2024-12-06 19:25:35.522211] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:24.970 [2024-12-06 19:25:35.522295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222516 ] 00:27:24.970 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:24.970 Zero copy mechanism will not be used. 
00:27:25.228 [2024-12-06 19:25:35.588816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.228 [2024-12-06 19:25:35.646879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.228 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.228 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:25.228 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:25.228 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:25.228 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:25.792 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.792 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.048 nvme0n1 00:27:26.048 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:26.305 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:26.305 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:26.305 Zero copy mechanism will not be used. 00:27:26.305 Running I/O for 2 seconds... 
00:27:28.608 6152.00 IOPS, 769.00 MiB/s [2024-12-06T18:25:39.185Z] 5932.00 IOPS, 741.50 MiB/s 00:27:28.608 Latency(us) 00:27:28.608 [2024-12-06T18:25:39.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.608 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:28.608 nvme0n1 : 2.00 5929.77 741.22 0.00 0.00 2693.93 685.70 7670.14 00:27:28.608 [2024-12-06T18:25:39.185Z] =================================================================================================================== 00:27:28.608 [2024-12-06T18:25:39.185Z] Total : 5929.77 741.22 0.00 0.00 2693.93 685.70 7670.14 00:27:28.608 { 00:27:28.608 "results": [ 00:27:28.608 { 00:27:28.608 "job": "nvme0n1", 00:27:28.608 "core_mask": "0x2", 00:27:28.608 "workload": "randread", 00:27:28.608 "status": "finished", 00:27:28.608 "queue_depth": 16, 00:27:28.608 "io_size": 131072, 00:27:28.608 "runtime": 2.00345, 00:27:28.608 "iops": 5929.771144775263, 00:27:28.608 "mibps": 741.2213930969078, 00:27:28.608 "io_failed": 0, 00:27:28.608 "io_timeout": 0, 00:27:28.608 "avg_latency_us": 2693.9275540591093, 00:27:28.608 "min_latency_us": 685.7007407407407, 00:27:28.608 "max_latency_us": 7670.139259259259 00:27:28.608 } 00:27:28.608 ], 00:27:28.608 "core_count": 1 00:27:28.608 } 00:27:28.608 19:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:28.608 19:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:28.608 19:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:28.608 19:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:28.608 | select(.opcode=="crc32c") 00:27:28.608 | "\(.module_name) \(.executed)"' 00:27:28.608 19:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1222516 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1222516 ']' 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1222516 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1222516 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1222516' 00:27:28.608 killing process with pid 1222516 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1222516 00:27:28.608 Received shutdown signal, test time was about 2.000000 seconds 
00:27:28.608 00:27:28.608 Latency(us) 00:27:28.608 [2024-12-06T18:25:39.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.608 [2024-12-06T18:25:39.185Z] =================================================================================================================== 00:27:28.608 [2024-12-06T18:25:39.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:28.608 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1222516 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1222926 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1222926 /var/tmp/bperf.sock 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1222926 ']' 00:27:28.867 19:25:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.867 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:28.867 [2024-12-06 19:25:39.317879] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:28.867 [2024-12-06 19:25:39.317969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222926 ] 00:27:28.867 [2024-12-06 19:25:39.383632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.867 [2024-12-06 19:25:39.442689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.125 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.125 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:29.125 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:29.125 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:29.125 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:29.383 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.383 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.949 nvme0n1 00:27:29.949 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:29.949 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:29.949 Running I/O for 2 seconds... 
00:27:31.819 21485.00 IOPS, 83.93 MiB/s [2024-12-06T18:25:42.396Z] 20246.50 IOPS, 79.09 MiB/s 00:27:31.819 Latency(us) 00:27:31.819 [2024-12-06T18:25:42.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.819 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:31.819 nvme0n1 : 2.01 20242.51 79.07 0.00 0.00 6308.63 2633.58 12233.39 00:27:31.819 [2024-12-06T18:25:42.396Z] =================================================================================================================== 00:27:31.819 [2024-12-06T18:25:42.396Z] Total : 20242.51 79.07 0.00 0.00 6308.63 2633.58 12233.39 00:27:31.819 { 00:27:31.819 "results": [ 00:27:31.819 { 00:27:31.819 "job": "nvme0n1", 00:27:31.819 "core_mask": "0x2", 00:27:31.819 "workload": "randwrite", 00:27:31.819 "status": "finished", 00:27:31.819 "queue_depth": 128, 00:27:31.819 "io_size": 4096, 00:27:31.819 "runtime": 2.008298, 00:27:31.819 "iops": 20242.513810201475, 00:27:31.819 "mibps": 79.07231957109951, 00:27:31.819 "io_failed": 0, 00:27:31.819 "io_timeout": 0, 00:27:31.819 "avg_latency_us": 6308.63492098893, 00:27:31.819 "min_latency_us": 2633.5762962962963, 00:27:31.819 "max_latency_us": 12233.386666666667 00:27:31.819 } 00:27:31.819 ], 00:27:31.819 "core_count": 1 00:27:31.819 } 00:27:32.077 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:32.077 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:32.077 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:32.077 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:32.077 | select(.opcode=="crc32c") 00:27:32.077 | "\(.module_name) \(.executed)"' 00:27:32.077 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1222926 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1222926 ']' 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1222926 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1222926 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1222926' 00:27:32.335 killing process with pid 1222926 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1222926 00:27:32.335 Received shutdown signal, test time was about 2.000000 seconds 
00:27:32.335 00:27:32.335 Latency(us) 00:27:32.335 [2024-12-06T18:25:42.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.335 [2024-12-06T18:25:42.912Z] =================================================================================================================== 00:27:32.335 [2024-12-06T18:25:42.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:32.335 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1222926 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1223338 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1223338 /var/tmp/bperf.sock 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1223338 ']' 00:27:32.593 19:25:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:32.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.593 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:32.593 [2024-12-06 19:25:42.983138] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:32.593 [2024-12-06 19:25:42.983221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223338 ] 00:27:32.593 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:32.593 Zero copy mechanism will not be used. 
00:27:32.593 [2024-12-06 19:25:43.052926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.593 [2024-12-06 19:25:43.114208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.852 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.852 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:32.852 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:32.852 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:32.852 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:33.110 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:33.110 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:33.678 nvme0n1 00:27:33.678 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:33.678 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:33.678 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:33.678 Zero copy mechanism will not be used. 00:27:33.678 Running I/O for 2 seconds... 
00:27:35.543 5585.00 IOPS, 698.12 MiB/s [2024-12-06T18:25:46.120Z] 5838.00 IOPS, 729.75 MiB/s 00:27:35.543 Latency(us) 00:27:35.543 [2024-12-06T18:25:46.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.543 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:35.543 nvme0n1 : 2.00 5835.31 729.41 0.00 0.00 2734.59 2063.17 9854.67 00:27:35.543 [2024-12-06T18:25:46.120Z] =================================================================================================================== 00:27:35.543 [2024-12-06T18:25:46.120Z] Total : 5835.31 729.41 0.00 0.00 2734.59 2063.17 9854.67 00:27:35.543 { 00:27:35.543 "results": [ 00:27:35.543 { 00:27:35.543 "job": "nvme0n1", 00:27:35.543 "core_mask": "0x2", 00:27:35.543 "workload": "randwrite", 00:27:35.543 "status": "finished", 00:27:35.543 "queue_depth": 16, 00:27:35.543 "io_size": 131072, 00:27:35.543 "runtime": 2.004348, 00:27:35.543 "iops": 5835.31402730464, 00:27:35.543 "mibps": 729.41425341308, 00:27:35.543 "io_failed": 0, 00:27:35.543 "io_timeout": 0, 00:27:35.543 "avg_latency_us": 2734.5894837107967, 00:27:35.543 "min_latency_us": 2063.17037037037, 00:27:35.543 "max_latency_us": 9854.672592592593 00:27:35.543 } 00:27:35.543 ], 00:27:35.543 "core_count": 1 00:27:35.543 } 00:27:35.801 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:35.801 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:35.801 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:35.801 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:35.801 | select(.opcode=="crc32c") 00:27:35.801 | "\(.module_name) \(.executed)"' 00:27:35.801 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1223338 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1223338 ']' 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1223338 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1223338 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1223338' 00:27:36.060 killing process with pid 1223338 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1223338 00:27:36.060 Received shutdown signal, test time was about 2.000000 seconds 
00:27:36.060 00:27:36.060 Latency(us) 00:27:36.060 [2024-12-06T18:25:46.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.060 [2024-12-06T18:25:46.637Z] =================================================================================================================== 00:27:36.060 [2024-12-06T18:25:46.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.060 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1223338 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1221961 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1221961 ']' 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1221961 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1221961 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1221961' 00:27:36.319 killing process with pid 1221961 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1221961 00:27:36.319 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1221961 00:27:36.577 00:27:36.577 
real 0m15.575s 00:27:36.577 user 0m31.476s 00:27:36.577 sys 0m4.132s 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.577 ************************************ 00:27:36.577 END TEST nvmf_digest_clean 00:27:36.577 ************************************ 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:36.577 ************************************ 00:27:36.577 START TEST nvmf_digest_error 00:27:36.577 ************************************ 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1223892 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:36.577 
19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1223892 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1223892 ']' 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.577 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.577 [2024-12-06 19:25:47.027325] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:36.577 [2024-12-06 19:25:47.027415] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.577 [2024-12-06 19:25:47.096862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.836 [2024-12-06 19:25:47.154080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.836 [2024-12-06 19:25:47.154129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:36.836 [2024-12-06 19:25:47.154143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.836 [2024-12-06 19:25:47.154155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.836 [2024-12-06 19:25:47.154166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.836 [2024-12-06 19:25:47.154821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.836 [2024-12-06 19:25:47.279525] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.836 19:25:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.836 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.836 null0 00:27:36.836 [2024-12-06 19:25:47.387407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.836 [2024-12-06 19:25:47.411680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1223913 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1223913 /var/tmp/bperf.sock 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1223913 ']' 
00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:37.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.095 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.095 [2024-12-06 19:25:47.458746] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:37.095 [2024-12-06 19:25:47.458824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223913 ] 00:27:37.095 [2024-12-06 19:25:47.522669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.095 [2024-12-06 19:25:47.586402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.353 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.353 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:37.353 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:37.353 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:37.611 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:37.611 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.611 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.611 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.611 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.612 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.870 nvme0n1 00:27:38.127 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:38.127 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.127 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:38.127 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.127 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:38.127 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:38.127 Running I/O for 2 seconds... 00:27:38.127 [2024-12-06 19:25:48.622435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.128 [2024-12-06 19:25:48.622490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.128 [2024-12-06 19:25:48.622512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.128 [2024-12-06 19:25:48.633854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.128 [2024-12-06 19:25:48.633884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.128 [2024-12-06 19:25:48.633902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.128 [2024-12-06 19:25:48.649406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.128 [2024-12-06 19:25:48.649439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.128 [2024-12-06 19:25:48.649471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.128 [2024-12-06 19:25:48.664832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.128 [2024-12-06 19:25:48.664864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3502 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.128 [2024-12-06 19:25:48.664895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.128 [2024-12-06 19:25:48.679512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.128 [2024-12-06 19:25:48.679545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.128 [2024-12-06 19:25:48.679563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.128 [2024-12-06 19:25:48.695641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.128 [2024-12-06 19:25:48.695683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.128 [2024-12-06 19:25:48.695702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.386 [2024-12-06 19:25:48.708618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.386 [2024-12-06 19:25:48.708650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.386 [2024-12-06 19:25:48.708677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.386 [2024-12-06 19:25:48.719140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.386 [2024-12-06 19:25:48.719170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.386 [2024-12-06 19:25:48.719203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.386 [2024-12-06 19:25:48.734634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.386 [2024-12-06 19:25:48.734687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.386 [2024-12-06 19:25:48.734725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.386 [2024-12-06 19:25:48.749439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.386 [2024-12-06 19:25:48.749470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.386 [2024-12-06 19:25:48.749502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.765881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.765914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.765932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.783434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 
00:27:38.387 [2024-12-06 19:25:48.783464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.783496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.794243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.794285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.794302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.809222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.809251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.809282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.823107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.823139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.823156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.836445] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.836472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.836503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.850846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.850878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.850896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.862171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.862205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.862236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.877176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.877206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.877237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.891884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.891914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.891945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.906015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.906046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.906064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.917118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.917145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.917176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.932951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.932998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.933014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.947284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.947314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.947346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.387 [2024-12-06 19:25:48.959633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.387 [2024-12-06 19:25:48.959684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.387 [2024-12-06 19:25:48.959702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:48.974691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:48.974735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:48.974751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:48.989204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:48.989233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:48.989264] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.003812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.003842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.003874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.019035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.019062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.019092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.032902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.032932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.032950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.043926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.043955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25527 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.043986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.058721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.058751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.058784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.073217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.073245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.073276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.085827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.085855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.085886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.099928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.099973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:3185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.099995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.112498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.112526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.112557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.124614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.124642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.124681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.140364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.140392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.140422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.156920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.156954] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.156972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.169946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.169977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.169994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.184815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.184847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.184864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.196366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.196394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.196426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.646 [2024-12-06 19:25:49.211473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6f92e0) 00:27:38.646 [2024-12-06 19:25:49.211500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.646 [2024-12-06 19:25:49.211530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.904 [2024-12-06 19:25:49.227431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.227467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.227501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.243515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.243546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.243563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.257509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.257540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.257557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.269183] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.269211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.269240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.284826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.284858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.284875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.299259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.299290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.299307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.311784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.311830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.311846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.323860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.323890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.323908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.339861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.339891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.339922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.353030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.353057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.353087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.365907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.365937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.365954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.377331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.377359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.377390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.391335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.391362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.391392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.408049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.408097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.408114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.421984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.422014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.422045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.436423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.436455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.436473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.449572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.449603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.449621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.462210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.462241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.905 [2024-12-06 19:25:49.462264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.905 [2024-12-06 19:25:49.473099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:38.905 [2024-12-06 19:25:49.473127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:38.905 [2024-12-06 19:25:49.473157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.487346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.487376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.487408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.499737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.499781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.499798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.513277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.513305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.513336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.528340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.528367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:3977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.528396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.542800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.542829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.542860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.554321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.554349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.554381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.567242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.567269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.567300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.579678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.579705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.579735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.593823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.593853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.593870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 18250.00 IOPS, 71.29 MiB/s [2024-12-06T18:25:49.741Z] [2024-12-06 19:25:49.606320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.606349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.606365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.620862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.620890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.620920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.632993] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.633023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.633040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.646313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.646357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.646373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.658352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.658382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.658413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.673186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.673214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.673245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.689113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.689145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.689168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.703705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.703736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.703754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.715031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.715076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.715093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.164 [2024-12-06 19:25:49.730201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.164 [2024-12-06 19:25:49.730231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.164 [2024-12-06 19:25:49.730262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.423 [2024-12-06 19:25:49.741305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.423 [2024-12-06 19:25:49.741333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.423 [2024-12-06 19:25:49.741349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.423 [2024-12-06 19:25:49.757384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.423 [2024-12-06 19:25:49.757428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.423 [2024-12-06 19:25:49.757445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.423 [2024-12-06 19:25:49.771508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.423 [2024-12-06 19:25:49.771552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.423 [2024-12-06 19:25:49.771569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.423 [2024-12-06 19:25:49.785049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.423 [2024-12-06 19:25:49.785079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.423 [2024-12-06 19:25:49.785110] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.423 [2024-12-06 19:25:49.802256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.423 [2024-12-06 19:25:49.802285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.423 [2024-12-06 19:25:49.802316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.423 [2024-12-06 19:25:49.815109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.423 [2024-12-06 19:25:49.815147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.423 [2024-12-06 19:25:49.815166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.423 [2024-12-06 19:25:49.827178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.423 [2024-12-06 19:25:49.827223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.423 [2024-12-06 19:25:49.827240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.423 [2024-12-06 19:25:49.839818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.423 [2024-12-06 19:25:49.839848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1586 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:39.423 [2024-12-06 19:25:49.839882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.854894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.854924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.854956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.867408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.867437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.867469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.880621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.880649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.880692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.897239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.897267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:51 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.897298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.913596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.913630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.913648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.928398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.928430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.928447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.944383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.944414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.944432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.955740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.955769] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.955800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.971177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.971206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.971237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.983628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.983679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.983698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.424 [2024-12-06 19:25:49.997802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.424 [2024-12-06 19:25:49.997834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.424 [2024-12-06 19:25:49.997852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.011734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.011805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.011823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.025239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.025287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.025304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.040218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.040252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.040270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.051982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.052014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.052055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.066946] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.066976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.067008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.080918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.080952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.080984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.093720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.093749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.093781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.110204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.110236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.110253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.122071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.122101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.122132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.137244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.137276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.137294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.153414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.153445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.153462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.169886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.169933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.169951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.183628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.183683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.183702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.198555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.198587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.198604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.209962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.209990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.210005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.223033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.223064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.223082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.239235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.239263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.239294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.683 [2024-12-06 19:25:50.253971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.683 [2024-12-06 19:25:50.254002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.683 [2024-12-06 19:25:50.254019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.268071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.268102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.268119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.281231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.281259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18269 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.281291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.296751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.296796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.296821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.308422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.308449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.308478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.322413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.322443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.322460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.338565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.338610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:116 nsid:1 lba:5859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.338629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.354629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.354661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.354690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.366195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.366222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.366253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.380290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.380334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.380352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.393423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.393455] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.393472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.407829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.407859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.407877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.423716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.423754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.423773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.440131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.440160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.440193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.454086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.454117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.454134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.465014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.465042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.465074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.479279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.479306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.479336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.492860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.492891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.492909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.504799] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.504827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.504858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.942 [2024-12-06 19:25:50.517797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:39.942 [2024-12-06 19:25:50.517829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.942 [2024-12-06 19:25:50.517846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.202 [2024-12-06 19:25:50.532478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:40.202 [2024-12-06 19:25:50.532524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.202 [2024-12-06 19:25:50.532542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.202 [2024-12-06 19:25:50.548397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:40.202 [2024-12-06 19:25:50.548428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.202 [2024-12-06 19:25:50.548445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:40.202 [2024-12-06 19:25:50.560589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:40.202 [2024-12-06 19:25:50.560619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.202 [2024-12-06 19:25:50.560636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.202 [2024-12-06 19:25:50.575014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:40.202 [2024-12-06 19:25:50.575061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.202 [2024-12-06 19:25:50.575078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.202 [2024-12-06 19:25:50.589886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:40.202 [2024-12-06 19:25:50.589917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.202 [2024-12-06 19:25:50.589934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.202 [2024-12-06 19:25:50.601567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f92e0) 00:27:40.202 [2024-12-06 19:25:50.601596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.202 [2024-12-06 19:25:50.601629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.202 18243.50 IOPS, 71.26 MiB/s 00:27:40.202 Latency(us) 00:27:40.202 [2024-12-06T18:25:50.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.202 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:40.202 nvme0n1 : 2.01 18252.00 71.30 0.00 0.00 7005.77 3422.44 22427.88 00:27:40.202 [2024-12-06T18:25:50.779Z] =================================================================================================================== 00:27:40.202 [2024-12-06T18:25:50.779Z] Total : 18252.00 71.30 0.00 0.00 7005.77 3422.44 22427.88 00:27:40.202 { 00:27:40.202 "results": [ 00:27:40.202 { 00:27:40.202 "job": "nvme0n1", 00:27:40.202 "core_mask": "0x2", 00:27:40.202 "workload": "randread", 00:27:40.202 "status": "finished", 00:27:40.202 "queue_depth": 128, 00:27:40.202 "io_size": 4096, 00:27:40.202 "runtime": 2.006081, 00:27:40.202 "iops": 18252.00477946803, 00:27:40.202 "mibps": 71.29689366979699, 00:27:40.202 "io_failed": 0, 00:27:40.202 "io_timeout": 0, 00:27:40.202 "avg_latency_us": 7005.77059076173, 00:27:40.202 "min_latency_us": 3422.4355555555558, 00:27:40.202 "max_latency_us": 22427.875555555554 00:27:40.202 } 00:27:40.202 ], 00:27:40.202 "core_count": 1 00:27:40.202 } 00:27:40.202 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:40.202 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:40.202 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:40.202 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:40.202 | .driver_specific 00:27:40.202 | .nvme_error 00:27:40.202 | .status_code 
00:27:40.202 | .command_transient_transport_error' 00:27:40.460 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:27:40.460 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1223913 00:27:40.460 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1223913 ']' 00:27:40.460 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1223913 00:27:40.460 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:40.460 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.460 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1223913 00:27:40.461 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:40.461 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:40.461 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1223913' 00:27:40.461 killing process with pid 1223913 00:27:40.461 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1223913 00:27:40.461 Received shutdown signal, test time was about 2.000000 seconds 00:27:40.461 00:27:40.461 Latency(us) 00:27:40.461 [2024-12-06T18:25:51.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.461 [2024-12-06T18:25:51.038Z] =================================================================================================================== 00:27:40.461 [2024-12-06T18:25:51.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:40.461 19:25:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1223913 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1224444 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1224444 /var/tmp/bperf.sock 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1224444 ']' 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:40.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.719 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.719 [2024-12-06 19:25:51.207625] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:40.719 [2024-12-06 19:25:51.207724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224444 ] 00:27:40.719 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:40.719 Zero copy mechanism will not be used. 00:27:40.719 [2024-12-06 19:25:51.272892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.977 [2024-12-06 19:25:51.331271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.977 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.977 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:40.977 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:40.977 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:41.235 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:41.235 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.235 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:41.235 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.235 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.235 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.801 nvme0n1 00:27:41.801 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:41.801 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.801 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.801 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.801 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:41.801 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:41.801 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:41.801 Zero copy mechanism will not be used. 00:27:41.801 Running I/O for 2 seconds... 
00:27:41.801 [2024-12-06 19:25:52.347117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:41.801 [2024-12-06 19:25:52.347189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.801 [2024-12-06 19:25:52.347213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.801 [2024-12-06 19:25:52.353990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:41.801 [2024-12-06 19:25:52.354026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.801 [2024-12-06 19:25:52.354046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.801 [2024-12-06 19:25:52.361553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:41.801 [2024-12-06 19:25:52.361588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.801 [2024-12-06 19:25:52.361616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.801 [2024-12-06 19:25:52.369432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:41.801 [2024-12-06 19:25:52.369465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.801 [2024-12-06 19:25:52.369484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:41.801 [2024-12-06 19:25:52.373275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:41.801 [2024-12-06 19:25:52.373306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:41.801 [2024-12-06 19:25:52.373339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.060 [2024-12-06 19:25:52.380212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.380244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.380277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.386090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.386136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.386156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.390412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.390444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.390463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.395031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.395063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.395081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.400702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.400734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.400752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.405416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.405448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.405466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.410215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.410256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.410275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.416121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.416153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.416171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.423608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.423641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.423659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.429829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.429861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.429879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.435429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.435460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.435478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.440774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.440807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.440826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.445401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.445432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.445450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.449722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.449754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.449772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.452407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.452437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.452454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.457026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.457058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.457076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.462555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.462587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.462619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.469088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.469118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.469150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.476955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.476987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.477020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.483318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.483349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.483366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.491041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.491086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.491104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.498973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.499006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.499024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.506535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.506567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.506601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.514055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.514106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.514131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.521851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.521884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.521902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.528467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.528514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.528532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.061 [2024-12-06 19:25:52.533797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.061 [2024-12-06 19:25:52.533829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.061 [2024-12-06 19:25:52.533847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.538452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.538484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.538502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.543416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.543448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.543466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.549542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.549573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.549607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.555236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.555269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.555287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.560613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.560659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.560687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.565970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.566018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.566036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.571316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.571347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.571380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.576893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.576926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.576944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.582532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.582581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.582599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.587845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.587876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.587910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.593098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.593129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.593161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.598214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.598245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.598278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.604348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.604382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.604401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.608489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.608521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.608562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.613841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.613874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.613892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.619626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.619680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.619715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.624794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.624826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.624844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.062 [2024-12-06 19:25:52.630352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.062 [2024-12-06 19:25:52.630385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.062 [2024-12-06 19:25:52.630403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.636443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.636476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.636494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.640194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.640224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.640242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.644695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.644726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.644744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.649175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.649205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.649238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.653640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.653699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.653718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.658165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.658210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.658228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.662833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.662865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.662884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.667513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.667543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.667574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.672806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.672838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.672856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.677429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.677459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.677476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.682151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.682180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.682213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.686782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.686811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.686845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.691392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.691421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.691455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.695818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.695848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.322 [2024-12-06 19:25:52.695866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.322 [2024-12-06 19:25:52.701464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.322 [2024-12-06 19:25:52.701495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.701530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.706703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.706734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.706752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.712363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.712394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.712426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.716911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.716958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.716975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.721543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.721573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.721605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.726239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.726270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.726288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.730930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.730961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.730979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.735690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.735722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.735746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.741101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.741134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.741152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.746577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.746623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.746641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.752005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.752037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.752055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.757324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.757356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.757374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.762489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.762521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.762539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.767822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.767854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.767872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.773210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.773243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.773261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.778251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.778283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.778302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.783278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.783317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.783336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.788352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.788384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.788402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.793775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.793806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.793824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.799227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.799260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.799278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.804635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.804675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.323 [2024-12-06 19:25:52.804696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.323 [2024-12-06 19:25:52.810183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:42.323 [2024-12-06 19:25:52.810215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.323 [2024-12-06 19:25:52.810233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.323 [2024-12-06 19:25:52.815571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.323 [2024-12-06 19:25:52.815605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.323 [2024-12-06 19:25:52.815624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.323 [2024-12-06 19:25:52.820961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.323 [2024-12-06 19:25:52.820994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.323 [2024-12-06 19:25:52.821013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.323 [2024-12-06 19:25:52.826364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.323 [2024-12-06 19:25:52.826397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.323 [2024-12-06 19:25:52.826415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.323 [2024-12-06 19:25:52.832788] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.323 [2024-12-06 19:25:52.832829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.323 [2024-12-06 19:25:52.832848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.323 [2024-12-06 19:25:52.838266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.323 [2024-12-06 19:25:52.838298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.323 [2024-12-06 19:25:52.838316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.323 [2024-12-06 19:25:52.843827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.323 [2024-12-06 19:25:52.843860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.843880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.849786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.849818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.849836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.856010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.856044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.856062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.861307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.861341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.861359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.866093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.866125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.866143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.872065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.872097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.872116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.877559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.877590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.877615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.880596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.880627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.880659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.886084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.886131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.886150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.891597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.891629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.891661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.324 [2024-12-06 19:25:52.896673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.324 [2024-12-06 19:25:52.896705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.324 [2024-12-06 19:25:52.896724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.583 [2024-12-06 19:25:52.901476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.583 [2024-12-06 19:25:52.901524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.583 [2024-12-06 19:25:52.901542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.583 [2024-12-06 19:25:52.905995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.583 [2024-12-06 19:25:52.906029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.583 [2024-12-06 19:25:52.906048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.583 [2024-12-06 19:25:52.911184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.583 [2024-12-06 19:25:52.911218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.583 [2024-12-06 19:25:52.911236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.583 [2024-12-06 19:25:52.916174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.583 [2024-12-06 19:25:52.916206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.583 [2024-12-06 19:25:52.916223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.583 [2024-12-06 19:25:52.921440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.583 [2024-12-06 19:25:52.921479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.583 [2024-12-06 19:25:52.921512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.583 [2024-12-06 19:25:52.926521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.583 [2024-12-06 19:25:52.926554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.583 [2024-12-06 19:25:52.926572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.583 [2024-12-06 19:25:52.931349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.583 [2024-12-06 19:25:52.931381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.583 [2024-12-06 19:25:52.931399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.936381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.936413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.936431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.941473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.941522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.941540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.946721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.946754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.946772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.951939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.951970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.951989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.956996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.957028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.957046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.962036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.962068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.962086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.966753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.966786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.966804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.972060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 
00:27:42.584 [2024-12-06 19:25:52.972093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.972111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.978582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.978614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.978632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.985966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.985998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.986031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:52.993070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:52.993103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:52.993121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.000787] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.000821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.000839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.008829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.008861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.008879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.017008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.017041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.017059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.024859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.024900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.024920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.032620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.032653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.032679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.040289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.040321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.040339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.048026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.048059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.048076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.055781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.055813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.055831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.063436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.063468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.063486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.070818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.070851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.070870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.078268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.078300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.078317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.085906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.085938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.085956] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.093529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.093562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.093580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.101151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.101184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.101202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.108864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.584 [2024-12-06 19:25:53.108897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.584 [2024-12-06 19:25:53.108915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.584 [2024-12-06 19:25:53.113287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.585 [2024-12-06 19:25:53.113319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:42.585 [2024-12-06 19:25:53.113336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.585 [2024-12-06 19:25:53.116116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.585 [2024-12-06 19:25:53.116147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.585 [2024-12-06 19:25:53.116165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.585 [2024-12-06 19:25:53.121584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.585 [2024-12-06 19:25:53.121631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.585 [2024-12-06 19:25:53.121649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.585 [2024-12-06 19:25:53.126657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.585 [2024-12-06 19:25:53.126715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.585 [2024-12-06 19:25:53.126737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.585 [2024-12-06 19:25:53.132941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.585 [2024-12-06 19:25:53.132973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.585 [2024-12-06 19:25:53.132991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.585 [2024-12-06 19:25:53.140505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.585 [2024-12-06 19:25:53.140538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.585 [2024-12-06 19:25:53.140569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.585 [2024-12-06 19:25:53.146419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.585 [2024-12-06 19:25:53.146450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.585 [2024-12-06 19:25:53.146483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.585 [2024-12-06 19:25:53.152288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.585 [2024-12-06 19:25:53.152334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.585 [2024-12-06 19:25:53.152351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.585 [2024-12-06 19:25:53.157647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.585 [2024-12-06 19:25:53.157689] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.585 [2024-12-06 19:25:53.157708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.844 [2024-12-06 19:25:53.162466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.844 [2024-12-06 19:25:53.162498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.844 [2024-12-06 19:25:53.162530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.844 [2024-12-06 19:25:53.168464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.844 [2024-12-06 19:25:53.168496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.844 [2024-12-06 19:25:53.168514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.844 [2024-12-06 19:25:53.174259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.844 [2024-12-06 19:25:53.174291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.844 [2024-12-06 19:25:53.174309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.844 [2024-12-06 19:25:53.180501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x976890) 00:27:42.844 [2024-12-06 19:25:53.180548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.844 [2024-12-06 19:25:53.180566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.844 [2024-12-06 19:25:53.186172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.844 [2024-12-06 19:25:53.186204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.844 [2024-12-06 19:25:53.186223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.844 [2024-12-06 19:25:53.191776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.844 [2024-12-06 19:25:53.191815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.844 [2024-12-06 19:25:53.191833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.197551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.197584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.197603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.202736] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.202769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.202787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.209435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.209468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.209486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.214989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.215021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.215039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.220025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.220057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.220075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.224611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.224643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.224660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.230005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.230038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.230056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.237032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.237064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.237083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.244789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.244823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.244842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.251872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.251905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.251923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.258737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.258770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.258788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.262548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.262580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.262613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.268810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.268842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.268875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.275903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.275950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.275968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.283895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.283929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.283962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.291772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.291805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.291824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.299321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.299369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.845 [2024-12-06 19:25:53.299394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.306527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.306560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.306594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.312193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.312224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.312242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.316786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.316818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.316837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.321254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.321285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.321317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.325816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.325846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.325864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.330510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.330540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.330573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.335050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.335094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.335110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.339881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.339911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.339943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.845 5411.00 IOPS, 676.38 MiB/s [2024-12-06T18:25:53.422Z] [2024-12-06 19:25:53.345367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.345398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.845 [2024-12-06 19:25:53.345416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.845 [2024-12-06 19:25:53.350092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.845 [2024-12-06 19:25:53.350122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.350155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.354685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.354716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.354734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.359202] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.359234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.359268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.363832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.363879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.363897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.368641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.368682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.368718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.373107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.373138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.373156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.377872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.377904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.377921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.383750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.383781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.383821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.388607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.388639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.388657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.393808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.393840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.393858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.400076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.400108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.400126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.405435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.405467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.405485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.410831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.410880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.410898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.846 [2024-12-06 19:25:53.416299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:42.846 [2024-12-06 19:25:53.416331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.846 [2024-12-06 19:25:53.416349] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.421825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.421858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.421876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.426748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.426780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.426798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.432565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.432604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.432623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.438416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.438448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.438480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.443868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.443899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.443917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.448788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.448823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.448840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.454174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.454205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.454222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.459938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.459969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.459987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.466504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.466535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.466553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.471936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.471968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.471986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.477262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.477294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.477312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.482423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.482455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.482472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.487828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.487868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.487887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.493078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.493110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.493128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.106 [2024-12-06 19:25:53.496377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.106 [2024-12-06 19:25:53.496408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.106 [2024-12-06 19:25:53.496426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.500042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 
00:27:43.107 [2024-12-06 19:25:53.500072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.500089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.503586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.503616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.503633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.506322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.506352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.506369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.509834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.509865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.509882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.514254] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.514284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.514325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.519389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.519419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.519436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.526048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.526079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.526096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.533793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.533824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.533841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.539410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.539441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.539473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.545198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.545245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.545263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.550271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.550302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.550319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.554925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.554955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.554988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.559942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.559986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.560003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.564766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.564803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.564821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.569569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.569599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.569632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.574242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.574286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.574303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.579262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.579293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.579325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.584344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.584389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.584407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.589092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.589122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.589141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.593794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.593823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:43.107 [2024-12-06 19:25:53.593855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.598529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.598573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.598589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.603208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.603238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.603255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.607681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.607710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.607727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.612577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.612616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.612639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.617095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.617140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.617157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.622026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.622072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.622089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.107 [2024-12-06 19:25:53.626853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.107 [2024-12-06 19:25:53.626885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.107 [2024-12-06 19:25:53.626903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.629973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.108 [2024-12-06 19:25:53.630003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.630021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.633722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.108 [2024-12-06 19:25:53.633754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.633772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.638450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.108 [2024-12-06 19:25:53.638481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.638499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.643682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.108 [2024-12-06 19:25:53.643713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.643737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.649246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 
00:27:43.108 [2024-12-06 19:25:53.649293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.649310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.654625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.108 [2024-12-06 19:25:53.654657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.654683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.660004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.108 [2024-12-06 19:25:53.660049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.660066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.666513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.108 [2024-12-06 19:25:53.666544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.666576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.672240] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.108 [2024-12-06 19:25:53.672270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.672288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.108 [2024-12-06 19:25:53.677078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.108 [2024-12-06 19:25:53.677109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.108 [2024-12-06 19:25:53.677127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.681823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.681854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.681871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.686391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.686421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.686438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.691401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.691437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.691455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.695967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.695997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.696015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.700575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.700605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.700622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.705264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.705294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.705311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.709950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.709980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.709997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.715560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.715590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.715608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.720642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.720679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.720715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.367 [2024-12-06 19:25:53.726907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.367 [2024-12-06 19:25:53.726954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.367 [2024-12-06 19:25:53.726972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.734366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.734399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.734417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.740234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.740266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.740284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.746009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.746040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.746059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.751176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.751207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:43.368 [2024-12-06 19:25:53.751226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.754330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.754360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.754377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.758431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.758461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.758478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.763054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.763084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.763116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.767796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.767826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.767859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.772426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.772456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.772490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.778464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.778496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.778520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.784804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.784836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.784853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.790951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.790996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.791013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.796401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.796433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.796451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.801926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.801958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.801976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.807489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.368 [2024-12-06 19:25:53.807521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.368 [2024-12-06 19:25:53.807540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.368 [2024-12-06 19:25:53.813223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 
00:27:43.368 [2024-12-06 19:25:53.813255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.813273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.819579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.819624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.819641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.826073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.826104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.826137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.831871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.831912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.831946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.837764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.837812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.837829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.843332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.843379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.843397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.848787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.848819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.848837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.854419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.854451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.854469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.859829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.859860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.859893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.865583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.865617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.865635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.871104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.871136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.871154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.368 [2024-12-06 19:25:53.876842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.368 [2024-12-06 19:25:53.876874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.368 [2024-12-06 19:25:53.876893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.882279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.882326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.882343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.886991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.887022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.887039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.891404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.891435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.891468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.895960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.896007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.896023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.900784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.900813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.900830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.905339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.905369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.905401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.909885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.909915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.909932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.914368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.914398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.914416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.918917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.918953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.918971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.923610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.923640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.923657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.929095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.929125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.929157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.933885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.933929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.933946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.369 [2024-12-06 19:25:53.938618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.369 [2024-12-06 19:25:53.938648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.369 [2024-12-06 19:25:53.938672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.631 [2024-12-06 19:25:53.943166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.631 [2024-12-06 19:25:53.943197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.631 [2024-12-06 19:25:53.943214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.631 [2024-12-06 19:25:53.947882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.631 [2024-12-06 19:25:53.947914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.631 [2024-12-06 19:25:53.947932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.631 [2024-12-06 19:25:53.952502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.631 [2024-12-06 19:25:53.952532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.631 [2024-12-06 19:25:53.952564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.631 [2024-12-06 19:25:53.957093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.631 [2024-12-06 19:25:53.957139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.631 [2024-12-06 19:25:53.957156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.631 [2024-12-06 19:25:53.961922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.631 [2024-12-06 19:25:53.961968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.631 [2024-12-06 19:25:53.961985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.631 [2024-12-06 19:25:53.967511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.631 [2024-12-06 19:25:53.967543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.631 [2024-12-06 19:25:53.967561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:53.975020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:53.975051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:53.975085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:53.981300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:53.981332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:53.981350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:53.986713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:53.986746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:53.986764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:53.991851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:53.991882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:53.991900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:53.997142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:53.997189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:53.997206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.002323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.002355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.002372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.006843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.006875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.006898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.011037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.011069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.011087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.015897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.015930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.015947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.020765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.020796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.020814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.025835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.025867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.025885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.031039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.031070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.031103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.036377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.036410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.036428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.042432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.042465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.042483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.047147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.047194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.047211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.055022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.055074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.055093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.062023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.062052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.062067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.068681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.068712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.068745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.074694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.074726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.074759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.080405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.080453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.080470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.086092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.086137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.086155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.092177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.092207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.092240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.098602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.098650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.098675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.104835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.104883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.104901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.110632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.110673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.110693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.115809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.115843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.632 [2024-12-06 19:25:54.115862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.632 [2024-12-06 19:25:54.120615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.632 [2024-12-06 19:25:54.120647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.120675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.126939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.126987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.127005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.132003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.132034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.132052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.136900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.136931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.136949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.141827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.141859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.141876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.147557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.147588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.147606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.153827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.153859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.153882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.160566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.160599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.160631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.165866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.165898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.165916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.170891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.170924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.170942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.175552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.175583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.175601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.180951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.180982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.181001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.186444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.186475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.186493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.191924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.191956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.191974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.197303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.197333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.197351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.633 [2024-12-06 19:25:54.200383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.633 [2024-12-06 19:25:54.200420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.633 [2024-12-06 19:25:54.200439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.952 [2024-12-06 19:25:54.206477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.952 [2024-12-06 19:25:54.206511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.952 [2024-12-06 19:25:54.206530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.952 [2024-12-06 19:25:54.212377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.952 [2024-12-06 19:25:54.212409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.952 [2024-12-06 19:25:54.212427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.952 [2024-12-06 19:25:54.217212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.952 [2024-12-06 19:25:54.217245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.952 [2024-12-06 19:25:54.217263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.952 [2024-12-06 19:25:54.222548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.952 [2024-12-06 19:25:54.222580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.952 [2024-12-06 19:25:54.222599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.952 [2024-12-06 19:25:54.228827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.952 [2024-12-06 19:25:54.228860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.952 [2024-12-06 19:25:54.228878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.952 [2024-12-06 19:25:54.236564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.952 [2024-12-06 19:25:54.236610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.952 [2024-12-06 19:25:54.236628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.952 [2024-12-06 19:25:54.241775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.952 [2024-12-06 19:25:54.241808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.952 [2024-12-06 19:25:54.241826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.952 [2024-12-06 19:25:54.247693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.952 [2024-12-06 19:25:54.247725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.952 [2024-12-06 19:25:54.247743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.953 [2024-12-06 19:25:54.253069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890)
00:27:43.953 [2024-12-06 19:25:54.253101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.953 [2024-12-06 19:25:54.253118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.258836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.258868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.258886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.261892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.261937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.261954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.267433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.267477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.267496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.272725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.272755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.272789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.278414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.278442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.278475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.283949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.283994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.284012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.289552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.289583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.289601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.295287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.295316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.295354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.302778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.302823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.302842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.307625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.307676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.307696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.313493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.313540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.313558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.318350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 
00:27:43.953 [2024-12-06 19:25:54.318380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.318414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.323312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.323358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.323376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.329198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.329245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.329261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.334911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.334957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.334974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.953 [2024-12-06 19:25:54.340565] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.340604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.340621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.953 5652.00 IOPS, 706.50 MiB/s [2024-12-06T18:25:54.530Z] [2024-12-06 19:25:54.347790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976890) 00:27:43.953 [2024-12-06 19:25:54.347823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.953 [2024-12-06 19:25:54.347841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.953 00:27:43.953 Latency(us) 00:27:43.953 [2024-12-06T18:25:54.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.953 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:43.954 nvme0n1 : 2.00 5650.26 706.28 0.00 0.00 2827.04 676.60 10485.76 00:27:43.954 [2024-12-06T18:25:54.531Z] =================================================================================================================== 00:27:43.954 [2024-12-06T18:25:54.531Z] Total : 5650.26 706.28 0.00 0.00 2827.04 676.60 10485.76 00:27:43.954 { 00:27:43.954 "results": [ 00:27:43.954 { 00:27:43.954 "job": "nvme0n1", 00:27:43.954 "core_mask": "0x2", 00:27:43.954 "workload": "randread", 00:27:43.954 "status": "finished", 00:27:43.954 "queue_depth": 16, 00:27:43.954 "io_size": 131072, 00:27:43.954 "runtime": 2.003446, 00:27:43.954 "iops": 5650.264594104358, 00:27:43.954 "mibps": 706.2830742630448, 00:27:43.954 
"io_failed": 0, 00:27:43.954 "io_timeout": 0, 00:27:43.954 "avg_latency_us": 2827.037650569297, 00:27:43.954 "min_latency_us": 676.5985185185185, 00:27:43.954 "max_latency_us": 10485.76 00:27:43.954 } 00:27:43.954 ], 00:27:43.954 "core_count": 1 00:27:43.954 } 00:27:43.954 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:43.954 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:43.954 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:43.954 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:43.954 | .driver_specific 00:27:43.954 | .nvme_error 00:27:43.954 | .status_code 00:27:43.954 | .command_transient_transport_error' 00:27:44.239 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 366 > 0 )) 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1224444 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1224444 ']' 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1224444 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1224444 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1224444' 00:27:44.240 killing process with pid 1224444 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1224444 00:27:44.240 Received shutdown signal, test time was about 2.000000 seconds 00:27:44.240 00:27:44.240 Latency(us) 00:27:44.240 [2024-12-06T18:25:54.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.240 [2024-12-06T18:25:54.817Z] =================================================================================================================== 00:27:44.240 [2024-12-06T18:25:54.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.240 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1224444 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1224858 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:44.498 19:25:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1224858 /var/tmp/bperf.sock 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1224858 ']' 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:44.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.498 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.498 [2024-12-06 19:25:54.970557] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:27:44.499 [2024-12-06 19:25:54.970640] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224858 ] 00:27:44.499 [2024-12-06 19:25:55.036116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.756 [2024-12-06 19:25:55.093649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.756 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.756 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:44.756 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:44.756 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.014 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:45.014 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.014 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.014 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.014 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.014 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.272 nvme0n1 00:27:45.272 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:45.530 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.531 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.531 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.531 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:45.531 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:45.531 Running I/O for 2 seconds... 
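The `data digest error` messages that follow are CRC-32C mismatches on received NVMe/TCP data PDUs, deliberately provoked here by `accel_error_inject_error -o crc32c -t corrupt`. As an illustration only, a minimal pure-Python sketch of the reflected Castagnoli CRC (CRC-32C) that the TCP transport's data digest is based on; table name and function are hypothetical, not SPDK code:

```python
def _build_crc32c_table():
    """Precompute the byte-wise lookup table for the reflected
    Castagnoli polynomial 0x1EDC6F41 (reflected form 0x82F63B78)."""
    poly = 0x82F63B78
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table


_CRC32C_TABLE = _build_crc32c_table()


def crc32c(data: bytes) -> int:
    """CRC-32C over `data`: init and final XOR are 0xFFFFFFFF,
    processing one byte per table lookup."""
    crc = 0xFFFFFFFF
    for b in data:
        crc = _CRC32C_TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF


# Standard check value for CRC-32C:
assert crc32c(b"123456789") == 0xE3069283
```

A receiver recomputes this digest over the PDU payload and compares it with the transmitted DDGST field; any corruption of the data in flight (as injected by this test) makes the values disagree, which SPDK reports as the digest errors below.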
00:27:45.531 [2024-12-06 19:25:55.998000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef4f40 00:27:45.531 [2024-12-06 19:25:55.999147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.531 [2024-12-06 19:25:55.999204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:45.531 [2024-12-06 19:25:56.011157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee49b0 00:27:45.531 [2024-12-06 19:25:56.012259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.531 [2024-12-06 19:25:56.012289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:45.531 [2024-12-06 19:25:56.022361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef57b0 00:27:45.531 [2024-12-06 19:25:56.023999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.531 [2024-12-06 19:25:56.024029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:45.531 [2024-12-06 19:25:56.034415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eed4e8 00:27:45.531 [2024-12-06 19:25:56.035793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.531 [2024-12-06 19:25:56.035823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:45.531 [2024-12-06 19:25:56.046468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef0ff8 00:27:45.531 [2024-12-06 19:25:56.047638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.531 [2024-12-06 19:25:56.047689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:45.531 [2024-12-06 19:25:56.059966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eeff18 00:27:45.531 [2024-12-06 19:25:56.061768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.531 [2024-12-06 19:25:56.061799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:45.531 [2024-12-06 19:25:56.068494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee6b70 00:27:45.531 [2024-12-06 19:25:56.069331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.531 [2024-12-06 19:25:56.069373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:45.531 [2024-12-06 19:25:56.082959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee6b70 00:27:45.531 [2024-12-06 19:25:56.084387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.531 [2024-12-06 19:25:56.084432] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:45.531 [2024-12-06 19:25:56.094661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:45.531 [2024-12-06 19:25:56.095691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.531 [2024-12-06 19:25:56.095721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:45.531 [2024-12-06 19:25:56.106272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee49b0 00:27:45.790 [2024-12-06 19:25:56.107738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.107768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.118189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016edece0 00:27:45.790 [2024-12-06 19:25:56.119354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.119398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.131712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016edf550 00:27:45.790 [2024-12-06 19:25:56.133362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.133407] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.143919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eefae0 00:27:45.790 [2024-12-06 19:25:56.145766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.145811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.152199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef8a50 00:27:45.790 [2024-12-06 19:25:56.153152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.153194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.164538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef6020 00:27:45.790 [2024-12-06 19:25:56.165633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.165684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.176408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee3d08 00:27:45.790 [2024-12-06 19:25:56.177600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7517 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:45.790 [2024-12-06 19:25:56.177630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.187692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef0350 00:27:45.790 [2024-12-06 19:25:56.188853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.188883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.201863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee23b8 00:27:45.790 [2024-12-06 19:25:56.203554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.203599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.210027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efb8b8 00:27:45.790 [2024-12-06 19:25:56.210929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.210974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.222335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eff3c8 00:27:45.790 [2024-12-06 19:25:56.223393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:21711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.223438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.236385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eea248 00:27:45.790 [2024-12-06 19:25:56.238007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.238036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.248421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee84c0 00:27:45.790 [2024-12-06 19:25:56.250179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.250210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.257095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef6458 00:27:45.790 [2024-12-06 19:25:56.258058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.258102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.271249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef8a50 00:27:45.790 [2024-12-06 19:25:56.272724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.272755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.282153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef6020 00:27:45.790 [2024-12-06 19:25:56.283392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.283431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.293715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee3d08 00:27:45.790 [2024-12-06 19:25:56.294954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.294997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.305696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef5378 00:27:45.790 [2024-12-06 19:25:56.306899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.306943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.318228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef31b8 00:27:45.790 
[2024-12-06 19:25:56.319508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.319552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.330389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef92c0 00:27:45.790 [2024-12-06 19:25:56.331834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.790 [2024-12-06 19:25:56.331863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:45.790 [2024-12-06 19:25:56.341972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef8e88 00:27:45.791 [2024-12-06 19:25:56.343436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.791 [2024-12-06 19:25:56.343479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:45.791 [2024-12-06 19:25:56.353045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eee5c8 00:27:45.791 [2024-12-06 19:25:56.354237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.791 [2024-12-06 19:25:56.354280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.791 [2024-12-06 19:25:56.364930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1532e30) with pdu=0x200016ee6738 00:27:45.791 [2024-12-06 19:25:56.365888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.791 [2024-12-06 19:25:56.365917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:46.049 [2024-12-06 19:25:56.376479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee8d30 00:27:46.049 [2024-12-06 19:25:56.377859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.049 [2024-12-06 19:25:56.377889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:46.049 [2024-12-06 19:25:56.388242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efa3a0 00:27:46.049 [2024-12-06 19:25:56.389349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.049 [2024-12-06 19:25:56.389392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:46.049 [2024-12-06 19:25:56.400339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efd208 00:27:46.049 [2024-12-06 19:25:56.401538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.049 [2024-12-06 19:25:56.401582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.049 [2024-12-06 19:25:56.411337] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee3d08 00:27:46.049 [2024-12-06 19:25:56.412399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.049 [2024-12-06 19:25:56.412443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.049 [2024-12-06 19:25:56.423161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efcdd0 00:27:46.049 [2024-12-06 19:25:56.423842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.049 [2024-12-06 19:25:56.423872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:46.049 [2024-12-06 19:25:56.436794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016edf550 00:27:46.050 [2024-12-06 19:25:56.438393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.438436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.448597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef3a28 00:27:46.050 [2024-12-06 19:25:56.450128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.450172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:27:46.050 [2024-12-06 19:25:56.459661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efb480 00:27:46.050 [2024-12-06 19:25:56.461082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.461126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.470963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef4b08 00:27:46.050 [2024-12-06 19:25:56.472012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.472042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.484639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eddc00 00:27:46.050 [2024-12-06 19:25:56.486452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.486496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.493014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ede038 00:27:46.050 [2024-12-06 19:25:56.493934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.493979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.505268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee6300 00:27:46.050 [2024-12-06 19:25:56.506444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.506474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.517600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eeff18 00:27:46.050 [2024-12-06 19:25:56.518830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.518876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.529585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee8088 00:27:46.050 [2024-12-06 19:25:56.530859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.530904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.543499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef1868 00:27:46.050 [2024-12-06 19:25:56.545306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.545349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.551856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee27f0 00:27:46.050 [2024-12-06 19:25:56.552637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.552688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.564270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eddc00 00:27:46.050 [2024-12-06 19:25:56.565280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.565324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.575461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee5a90 00:27:46.050 [2024-12-06 19:25:56.576467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.576511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.589695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efcdd0 00:27:46.050 [2024-12-06 19:25:56.591249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 
[2024-12-06 19:25:56.591283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.601549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee49b0 00:27:46.050 [2024-12-06 19:25:56.603139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.603182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.612211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef6cc8 00:27:46.050 [2024-12-06 19:25:56.613553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.613583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:46.050 [2024-12-06 19:25:56.623728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef4298 00:27:46.050 [2024-12-06 19:25:56.625053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.050 [2024-12-06 19:25:56.625084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.638042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee6b70 00:27:46.309 [2024-12-06 19:25:56.639957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13370 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.640002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.646321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef5378 00:27:46.309 [2024-12-06 19:25:56.647238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.647283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.658031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eecc78 00:27:46.309 [2024-12-06 19:25:56.659013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.659057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.669990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee0a68 00:27:46.309 [2024-12-06 19:25:56.671005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.671035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.684330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef81e0 00:27:46.309 [2024-12-06 19:25:56.686080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:78 nsid:1 lba:3513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.686125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.696485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef1ca0 00:27:46.309 [2024-12-06 19:25:56.698467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.698511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.704903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efeb58 00:27:46.309 [2024-12-06 19:25:56.706080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.706110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.717047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef7970 00:27:46.309 [2024-12-06 19:25:56.717741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.717771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.729482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee6738 00:27:46.309 [2024-12-06 19:25:56.730328] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.730358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.743555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eebfd0 00:27:46.309 [2024-12-06 19:25:56.745491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.745535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.752211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee7818 00:27:46.309 [2024-12-06 19:25:56.753003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.753046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.763844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eea248 00:27:46.309 [2024-12-06 19:25:56.764724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.764757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.776351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with 
pdu=0x200016efda78 00:27:46.309 [2024-12-06 19:25:56.777350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.777395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.788190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee99d8 00:27:46.309 [2024-12-06 19:25:56.789097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.789140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.802168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efa3a0 00:27:46.309 [2024-12-06 19:25:56.803563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.803608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.813092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efc128 00:27:46.309 [2024-12-06 19:25:56.814337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.814382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.827110] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee8088 00:27:46.309 [2024-12-06 19:25:56.828945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.828990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.835512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eed920 00:27:46.309 [2024-12-06 19:25:56.836462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.836503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.849599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee5220 00:27:46.309 [2024-12-06 19:25:56.850883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.850928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.860399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef2510 00:27:46.309 [2024-12-06 19:25:56.861623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.861674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.872735] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee0630 00:27:46.309 [2024-12-06 19:25:56.874096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.309 [2024-12-06 19:25:56.874138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.309 [2024-12-06 19:25:56.885156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efac10 00:27:46.590 [2024-12-06 19:25:56.886792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.886822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:56.896986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee0630 00:27:46.590 [2024-12-06 19:25:56.898356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.898406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:56.908178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee1b48 00:27:46.590 [2024-12-06 19:25:56.909542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.909571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:27:46.590 [2024-12-06 19:25:56.919823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee6fa8 00:27:46.590 [2024-12-06 19:25:56.921006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.921049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:56.932235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef6020 00:27:46.590 [2024-12-06 19:25:56.933473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.933517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:56.943219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef5be8 00:27:46.590 [2024-12-06 19:25:56.944293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.944336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:56.954210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee0a68 00:27:46.590 [2024-12-06 19:25:56.955215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.955258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:56.966504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efc560 00:27:46.590 [2024-12-06 19:25:56.967540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.967583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:56.977828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efe2e8 00:27:46.590 [2024-12-06 19:25:56.978847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.978891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:46.590 21582.00 IOPS, 84.30 MiB/s [2024-12-06T18:25:57.167Z] [2024-12-06 19:25:56.991242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.590 [2024-12-06 19:25:56.992430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:56.992473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.002086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eed4e8 00:27:46.590 [2024-12-06 19:25:57.003181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 
19:25:57.003224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.015784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eea680 00:27:46.590 [2024-12-06 19:25:57.017281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:57.017327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.026833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef7538 00:27:46.590 [2024-12-06 19:25:57.028189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:57.028234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.037765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eee190 00:27:46.590 [2024-12-06 19:25:57.038993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:57.039023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.049476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eeaef0 00:27:46.590 [2024-12-06 19:25:57.050530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10385 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:57.050574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.061637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef7da8 00:27:46.590 [2024-12-06 19:25:57.062647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:57.062703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.073833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eeea00 00:27:46.590 [2024-12-06 19:25:57.074965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:57.074994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.086145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee6b70 00:27:46.590 [2024-12-06 19:25:57.087446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:57.087489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.096362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef4f40 00:27:46.590 [2024-12-06 19:25:57.097269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:53 nsid:1 lba:10165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:57.097313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.108470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee8088 00:27:46.590 [2024-12-06 19:25:57.109264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.590 [2024-12-06 19:25:57.109294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:46.590 [2024-12-06 19:25:57.120804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efef90 00:27:46.590 [2024-12-06 19:25:57.121523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.591 [2024-12-06 19:25:57.121552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:46.591 [2024-12-06 19:25:57.133065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016efb480 00:27:46.591 [2024-12-06 19:25:57.134011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.591 [2024-12-06 19:25:57.134040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:46.848 [2024-12-06 19:25:57.144638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ee84c0 00:27:46.848 [2024-12-06 19:25:57.145955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.848 [2024-12-06 19:25:57.145985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:46.848 [2024-12-06 19:25:57.156626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eea248 00:27:46.848 [2024-12-06 19:25:57.157750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.848 [2024-12-06 19:25:57.157794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:46.848 [2024-12-06 19:25:57.169066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eec408 00:27:46.848 [2024-12-06 19:25:57.170596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.848 [2024-12-06 19:25:57.170640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:46.848 [2024-12-06 19:25:57.181567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016eea680 00:27:46.848 [2024-12-06 19:25:57.183158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.848 [2024-12-06 19:25:57.183202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:46.848 [2024-12-06 19:25:57.193399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 
[2024-12-06 19:25:57.193685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.193715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.207628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.207891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.207926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.221626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.221941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.221984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.235926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.236272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.236301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.250113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.250448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.250477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.264333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.264636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.264693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.278641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.278901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.278930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.292693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.292966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.293010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.306957] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.307290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.307318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.321229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.321501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.321544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.335185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.335458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.335485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.349053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.349326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.349368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:27:46.849 [2024-12-06 19:25:57.363180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.363463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.363506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.377391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.377698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.377746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.391465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.391797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.391842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.405636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.405950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.405993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:46.849 [2024-12-06 19:25:57.419785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:46.849 [2024-12-06 19:25:57.420050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.849 [2024-12-06 19:25:57.420093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.433283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.433574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.433616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.447382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.447675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.447704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.461672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.462071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.462114] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.475826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.476077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.476106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.490102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.490376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.490420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.504266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.504610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.504653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.518316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.518606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.518647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.532645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.532908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.532937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.546538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.546805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.546835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.560905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.561318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.561362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.575049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.575324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23908 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:47.106 [2024-12-06 19:25:57.575371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.589280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.589548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.589590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.603430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.603748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.603792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.617584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.617862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.617906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.631699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.631958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:24065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.632000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.645868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.646159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.646202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.660051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.660322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.660364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.106 [2024-12-06 19:25:57.674299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.106 [2024-12-06 19:25:57.674564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.106 [2024-12-06 19:25:57.674607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.688032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.688369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.688398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.701959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.702237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.702279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.716072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.716344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.716386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.730198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.730403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.730447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.744088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 
00:27:47.364 [2024-12-06 19:25:57.744359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.744401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.758388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.758692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.758731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.772376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.772607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.772635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.786342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.786635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.786691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.800488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.800774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.800804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.814586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.814853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.814882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.828723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.829040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.829068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.842741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.843001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.843029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.856743] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.857040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.857068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.870801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.871073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.871101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.884987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.885379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.885421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.364 [2024-12-06 19:25:57.898984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.364 [2024-12-06 19:25:57.899331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.364 [2024-12-06 19:25:57.899360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:27:47.364 [2024-12-06 19:25:57.912945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.365 [2024-12-06 19:25:57.913232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.365 [2024-12-06 19:25:57.913274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.365 [2024-12-06 19:25:57.927213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.365 [2024-12-06 19:25:57.927480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.365 [2024-12-06 19:25:57.927524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.621 [2024-12-06 19:25:57.940830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.621 [2024-12-06 19:25:57.941079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.621 [2024-12-06 19:25:57.941113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.621 [2024-12-06 19:25:57.954176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.621 [2024-12-06 19:25:57.954451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.621 [2024-12-06 19:25:57.954493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.621 [2024-12-06 19:25:57.968134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.621 [2024-12-06 19:25:57.968477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.621 [2024-12-06 19:25:57.968506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.621 [2024-12-06 19:25:57.982423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1532e30) with pdu=0x200016ef9b30 00:27:47.621 [2024-12-06 19:25:57.982761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.621 [2024-12-06 19:25:57.982791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:47.621 20210.00 IOPS, 78.95 MiB/s 00:27:47.621 Latency(us) 00:27:47.621 [2024-12-06T18:25:58.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.621 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:47.621 nvme0n1 : 2.01 20197.23 78.90 0.00 0.00 6322.78 2500.08 15825.73 00:27:47.621 [2024-12-06T18:25:58.198Z] =================================================================================================================== 00:27:47.621 [2024-12-06T18:25:58.198Z] Total : 20197.23 78.90 0.00 0.00 6322.78 2500.08 15825.73 00:27:47.621 { 00:27:47.621 "results": [ 00:27:47.621 { 00:27:47.621 "job": "nvme0n1", 00:27:47.622 "core_mask": "0x2", 00:27:47.622 "workload": "randwrite", 00:27:47.622 "status": "finished", 00:27:47.622 "queue_depth": 128, 00:27:47.622 "io_size": 4096, 00:27:47.622 "runtime": 2.00681, 00:27:47.622 "iops": 20197.22843717143, 
00:27:47.622 "mibps": 78.8954235827009, 00:27:47.622 "io_failed": 0, 00:27:47.622 "io_timeout": 0, 00:27:47.622 "avg_latency_us": 6322.783555891824, 00:27:47.622 "min_latency_us": 2500.077037037037, 00:27:47.622 "max_latency_us": 15825.730370370371 00:27:47.622 } 00:27:47.622 ], 00:27:47.622 "core_count": 1 00:27:47.622 } 00:27:47.622 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:47.622 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:47.622 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:47.622 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:47.622 | .driver_specific 00:27:47.622 | .nvme_error 00:27:47.622 | .status_code 00:27:47.622 | .command_transient_transport_error' 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1224858 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1224858 ']' 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1224858 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1224858 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1224858' 00:27:47.878 killing process with pid 1224858 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1224858 00:27:47.878 Received shutdown signal, test time was about 2.000000 seconds 00:27:47.878 00:27:47.878 Latency(us) 00:27:47.878 [2024-12-06T18:25:58.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.878 [2024-12-06T18:25:58.455Z] =================================================================================================================== 00:27:47.878 [2024-12-06T18:25:58.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.878 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1224858 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1225263 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 
-q 16 -z 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1225263 /var/tmp/bperf.sock 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1225263 ']' 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.135 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.135 [2024-12-06 19:25:58.587817] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:48.135 [2024-12-06 19:25:58.587902] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225263 ] 00:27:48.135 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:48.135 Zero copy mechanism will not be used. 
00:27:48.135 [2024-12-06 19:25:58.651999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.135 [2024-12-06 19:25:58.707477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.392 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.392 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:48.392 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.392 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.697 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:48.697 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.697 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.697 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.697 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.697 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.953 nvme0n1 00:27:48.953 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:48.953 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.953 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.953 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.953 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:48.953 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.210 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:49.210 Zero copy mechanism will not be used. 00:27:49.210 Running I/O for 2 seconds... 00:27:49.210 [2024-12-06 19:25:59.619434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.619554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.210 [2024-12-06 19:25:59.619597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.210 [2024-12-06 19:25:59.625394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.625516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.210 [2024-12-06 19:25:59.625550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.210 
[2024-12-06 19:25:59.631163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.631303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.210 [2024-12-06 19:25:59.631334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.210 [2024-12-06 19:25:59.637767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.637887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.210 [2024-12-06 19:25:59.637917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.210 [2024-12-06 19:25:59.643993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.644154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.210 [2024-12-06 19:25:59.644184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.210 [2024-12-06 19:25:59.649877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.649987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.210 [2024-12-06 19:25:59.650016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.210 [2024-12-06 19:25:59.654968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.655096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.210 [2024-12-06 19:25:59.655124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.210 [2024-12-06 19:25:59.660150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.660280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.210 [2024-12-06 19:25:59.660308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.210 [2024-12-06 19:25:59.665136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.665242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.210 [2024-12-06 19:25:59.665271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.210 [2024-12-06 19:25:59.670310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.210 [2024-12-06 19:25:59.670410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.670439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.675405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.675505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.675534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.680361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.680498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.680527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.685988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.686060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.686087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.692152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.692224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.692253] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.697414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.697493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.697519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.702471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.702547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.702574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.707639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.707719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.707746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.712871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.712946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:49.211 [2024-12-06 19:25:59.712974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.717841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.717921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.717948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.723168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.723237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.723265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.728267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.728337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.728364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.733530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.733613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.733649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.738977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.739049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.739076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.744471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.744552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.744580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.750038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.750105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.750132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.754952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.755029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.755056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.760008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.760142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.760170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.765414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.765712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.765743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.771362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.771719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.771749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.777389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 
00:27:49.211 [2024-12-06 19:25:59.777743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.777772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.211 [2024-12-06 19:25:59.784335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.211 [2024-12-06 19:25:59.784599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.211 [2024-12-06 19:25:59.784644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.469 [2024-12-06 19:25:59.789768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.469 [2024-12-06 19:25:59.790057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.469 [2024-12-06 19:25:59.790085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.469 [2024-12-06 19:25:59.795415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.469 [2024-12-06 19:25:59.795735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.469 [2024-12-06 19:25:59.795765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.469 [2024-12-06 19:25:59.801226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.469 [2024-12-06 19:25:59.801511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.469 [2024-12-06 19:25:59.801540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.469 [2024-12-06 19:25:59.806810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.469 [2024-12-06 19:25:59.807099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.469 [2024-12-06 19:25:59.807127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.469 [2024-12-06 19:25:59.812405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.469 [2024-12-06 19:25:59.812693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.469 [2024-12-06 19:25:59.812722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.469 [2024-12-06 19:25:59.818085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.469 [2024-12-06 19:25:59.818404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.469 [2024-12-06 19:25:59.818432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.469 [2024-12-06 19:25:59.823771] 
00:27:49.469 tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8
00:27:49.469 [2024-12-06 19:25:59.823988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.469 [2024-12-06 19:25:59.824018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... dozens of further entries elided (2024-12-06 19:25:59.829 through 19:26:00.259, elapsed 00:27:49.469 to 00:27:49.731): each repeats the same pattern — a data_crc32_calc_done data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8, followed by a WRITE on qid:1 completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only lba, cid, and sqhd vary between entries ...]
dnr:0 00:27:49.731 [2024-12-06 19:26:00.264499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.731 [2024-12-06 19:26:00.264809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.731 [2024-12-06 19:26:00.264838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.731 [2024-12-06 19:26:00.270393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.731 [2024-12-06 19:26:00.270640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.731 [2024-12-06 19:26:00.270676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.731 [2024-12-06 19:26:00.275561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.731 [2024-12-06 19:26:00.275715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.731 [2024-12-06 19:26:00.275744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.731 [2024-12-06 19:26:00.280128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.731 [2024-12-06 19:26:00.280290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.731 [2024-12-06 19:26:00.280319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.731 [2024-12-06 19:26:00.284661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.731 [2024-12-06 19:26:00.284854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.731 [2024-12-06 19:26:00.284883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.731 [2024-12-06 19:26:00.289172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.731 [2024-12-06 19:26:00.289316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.731 [2024-12-06 19:26:00.289345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.731 [2024-12-06 19:26:00.293795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.731 [2024-12-06 19:26:00.293971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.731 [2024-12-06 19:26:00.294000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.731 [2024-12-06 19:26:00.298355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.731 [2024-12-06 19:26:00.298530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.731 [2024-12-06 19:26:00.298559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.731 [2024-12-06 19:26:00.302853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.731 [2024-12-06 19:26:00.303027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.731 [2024-12-06 19:26:00.303056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.307322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.307494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.307523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.311904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.312103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.312137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.316266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.316425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:49.989 [2024-12-06 19:26:00.316454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.320700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.320869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.320899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.325011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.325155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.325185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.329841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.330098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.330127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.335005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.335263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.335292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.340117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.340335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.340364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.345800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.345985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.346015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.351095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.351325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.351355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.356223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.356455] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.356484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.361400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.361600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.361629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.366566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.366821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.366852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.371604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.371833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.371863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.376735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.376992] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.377023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.381840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.382096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.382127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.386854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.387160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.387192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.392089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.392358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.392389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.397955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with 
pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.398255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.398285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.402848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.403061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.403091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.407359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.407539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.407569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.411717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.411904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.411933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.416366] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.416542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.416571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.420932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.421106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.421136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.425617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.425804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.425835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.431033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.431229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.431258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 
19:26:00.435741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.435923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.435953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.440251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.440532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.440568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.445321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.445609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.445638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.450424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.450697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.450727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.455402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.455563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.455593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.460447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.460633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.460661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.465661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.465845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.465874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.470736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.470913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.470942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.475845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.475996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.476025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.480986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.989 [2024-12-06 19:26:00.481085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.989 [2024-12-06 19:26:00.481113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.989 [2024-12-06 19:26:00.485987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.486167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.486196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.491097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.491233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.491262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.496178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.496327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.496357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.501237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.501413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.501442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.506315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.506481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.506511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.511518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.511710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:49.990 [2024-12-06 19:26:00.511740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.516494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.516656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.516695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.521558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.521735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.521764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.526645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.526810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.526839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.531729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.531891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.531921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.536942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.537089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.537119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.542025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.542184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.542215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.547065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.547259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.547288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.552327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.552465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.552494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.557397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.557537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.557566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.990 [2024-12-06 19:26:00.562568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:49.990 [2024-12-06 19:26:00.562762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.990 [2024-12-06 19:26:00.562792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.567792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.567955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.567983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.573020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 
00:27:50.248 [2024-12-06 19:26:00.573170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.573205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.578109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.578232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.578262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.583288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.583457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.583486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.588382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.588542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.588572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.593465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.593608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.593637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.598518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.598687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.598717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.603553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.603657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.603693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.608690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.608860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.608890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.613768] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.613919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.613948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.248 5759.00 IOPS, 719.88 MiB/s [2024-12-06T18:26:00.825Z] [2024-12-06 19:26:00.620004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.620151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.620180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.625053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.625236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.625266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.629517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.629612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.629641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.633784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.633890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.633927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.638071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.638161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.638195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.642254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.642357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.642387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.646484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.646582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.646611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.650685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.650777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.650806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.248 [2024-12-06 19:26:00.654934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.248 [2024-12-06 19:26:00.655022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.248 [2024-12-06 19:26:00.655051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.659195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.659320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.659350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.664217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.664298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:50.249 [2024-12-06 19:26:00.664327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.668941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.669010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.669039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.673685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.673764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.673792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.678272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.678346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.678378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.682863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.682960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.682989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.687435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.687506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.687534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.692160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.692232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.692259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.696700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.696773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.696807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.701122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.701199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.701226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.705700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.705768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.705795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.710016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.710094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.710123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.714333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.714405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.714432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.718960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 
00:27:50.249 [2024-12-06 19:26:00.719033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.719061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.723577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.723715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.723746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.728230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.728333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.728362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.733431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.733531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.733561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.738325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.738449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.738478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.742735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.742824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.742853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.747110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.747227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.747257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.751463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.751596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.751624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.755943] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.756053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.756082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.760275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.760376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.760405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.764766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.764857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.764885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.769316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.769384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.769412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:50.249 [2024-12-06 19:26:00.773610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.773703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.773733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.777840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.777937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.777967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.782497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.782622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.782650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.787527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.787720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.787750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.792760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.792940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.792969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.798625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.798773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.798803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.803002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.803083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.803112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.807499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.807583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.807612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.811747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.811858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.811887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.815982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.816124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.816160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.249 [2024-12-06 19:26:00.820592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.249 [2024-12-06 19:26:00.820704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.249 [2024-12-06 19:26:00.820734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.824985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.825078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:50.508 [2024-12-06 19:26:00.825106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.829291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.829389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.829418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.833688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.833785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.833814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.838152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.838317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.838346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.842464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.842582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.842610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.846883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.846987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.847015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.851219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.851325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.851354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.855575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.855683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.855712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.859969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.860084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.860113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.864417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.864514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.864543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.868965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.869048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.869077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.873375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.873456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.873484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.877832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 
00:27:50.508 [2024-12-06 19:26:00.877904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.877932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.882220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.882306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.882334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.886629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.886773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.886810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.890944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.891079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.891110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.895308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.895416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.895446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.899676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.899817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.899847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.904086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.904197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.904226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.908648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.908762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.908791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.913038] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.913142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.913170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.917484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.917621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.917651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.921889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.922009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.922038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.926205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.926286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.926315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:50.508 [2024-12-06 19:26:00.930433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.930552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.930589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.935270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.935424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.935454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.940311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.940498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.940528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.508 [2024-12-06 19:26:00.945448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.508 [2024-12-06 19:26:00.945570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.508 [2024-12-06 19:26:00.945599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.951201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.951305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:00.951335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.956246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.956350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:00.956379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.961427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.961565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:00.961596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.966604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.966772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:00.966802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.971646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.971853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:00.971882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.976778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.976930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:00.976959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.981838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.982015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:00.982045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.986917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.987081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:50.509 [2024-12-06 19:26:00.987110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.992027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.992166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:00.992195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:00.997219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:00.997375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:00.997405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.002221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.002329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.002359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.006652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.006793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.006822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.011737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.011871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.011901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.016825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.016955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.016984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.021951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.022159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.022189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.027085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.027238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.027266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.031901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.032017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.032046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.036154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.036250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.036279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.040400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.040520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.040549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.045125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 
00:27:50.509 [2024-12-06 19:26:01.045254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.045283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.049980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.050125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.050154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.055767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.055925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.055954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.060789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.060958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.060994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.065926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.066145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.066175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.070955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.071105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.071134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.076080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.076257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.076286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.509 [2024-12-06 19:26:01.081272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.509 [2024-12-06 19:26:01.081430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.509 [2024-12-06 19:26:01.081460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.086103] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.086252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.086282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.090484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.090616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.090645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.094845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.095012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.095041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.099951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.100042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.100071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:50.766 [2024-12-06 19:26:01.104300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.104439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.104469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.108521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.108644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.108680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.112823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.112984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.113014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.117509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.117709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.117740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.122605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.122744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.122773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.128448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.128557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.128586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.133524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.133650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.133692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.137794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.137903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.137938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.142325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.142437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.142469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.146810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.146928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.146959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.151257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.151400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.151431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.155762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.155901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:50.766 [2024-12-06 19:26:01.155930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.160305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.160455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.766 [2024-12-06 19:26:01.160485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.766 [2024-12-06 19:26:01.164631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.766 [2024-12-06 19:26:01.164800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.164830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.169088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.169252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.169280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.173586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.173732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.173763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.177965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.178095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.178124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.182330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.182442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.182478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.186852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.186933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.186961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.191214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.191343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.191371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.196018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.196190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.196219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.201107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.201249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.201280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.206762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.206943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.206973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.211916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 
00:27:50.767 [2024-12-06 19:26:01.212074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.212103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.216259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.216381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.216410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.220688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.220831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.220860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.225140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.225277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.225306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.229471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.229598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.229627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.233800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.233909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.233938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.238162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.238282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.238311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.243080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.243266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.243296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.248220] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.248363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.248392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.253573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.253805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.253835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.259435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.259601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.259631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.264333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.264470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.264500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:50.767 [2024-12-06 19:26:01.268657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.268782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.268811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.272866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.273042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.273070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.277496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.277678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.277708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.282606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.282691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.282722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.286804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.767 [2024-12-06 19:26:01.286895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.767 [2024-12-06 19:26:01.286923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.767 [2024-12-06 19:26:01.291062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.291184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.291213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.295281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.295392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.295421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.299472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.299585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.299613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.303612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.303742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.303777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.307810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.307925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.307954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.312036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.312190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.312220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.316246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.316342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:50.768 [2024-12-06 19:26:01.316370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.320457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.320553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.320583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.324710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.324801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.324829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.328895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.329014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.329043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.333105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.333217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.333246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.337292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.337387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.337414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.768 [2024-12-06 19:26:01.341471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:50.768 [2024-12-06 19:26:01.341590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.768 [2024-12-06 19:26:01.341620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.025 [2024-12-06 19:26:01.345730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.025 [2024-12-06 19:26:01.345849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.025 [2024-12-06 19:26:01.345877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.025 [2024-12-06 19:26:01.349939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.025 [2024-12-06 19:26:01.350028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.350059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.354278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.354370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.354399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.358827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.358942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.358971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.363575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.363717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.363747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.368047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 
00:27:51.026 [2024-12-06 19:26:01.368138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.368180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.373050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.373204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.373233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.378377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.378495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.378524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.384516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.384779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.384809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.389979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.390249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.390280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.395164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.395360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.395393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.400233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.400407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.400438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.404617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.404774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.404804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.409732] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.409998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.410028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.415376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.415653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.415691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.420947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.421148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.421178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.425718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.425870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.425904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:51.026 [2024-12-06 19:26:01.430341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.430494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.430526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.434811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.434991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.435021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.439309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.439496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.439525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.443875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.444035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.444065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.448266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.448433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.448463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.453273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.453492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.453521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.459138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.459297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.459328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.463832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.464025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.464055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.468367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.468539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.468570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.472898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.473062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.473092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.477329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.477499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.477528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.481781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.481948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:51.026 [2024-12-06 19:26:01.481978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.486464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.486619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.486648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.491015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.491242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.491272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.495537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.495759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.495790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.500058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.500229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.500258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.504528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.504726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.504757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.509071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.509220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.509249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.513502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.026 [2024-12-06 19:26:01.513658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.026 [2024-12-06 19:26:01.513702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.026 [2024-12-06 19:26:01.518416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.518587] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.518616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.523621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.523795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.523825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.528886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.529167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.529197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.534052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.534220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.534250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.539163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.539434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.539464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.544116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.544311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.544340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.549648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.549787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.549821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.555354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.555467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.555496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.560493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with 
pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.560560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.560589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.565011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.565082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.565110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.569205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.569281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.569311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.573463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.573560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.573589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.578378] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.578572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.578601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.583493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.583634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.583671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.588566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.588703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.588732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 19:26:01.594264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.594435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.594465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.027 [2024-12-06 
19:26:01.599989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.027 [2024-12-06 19:26:01.600154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.027 [2024-12-06 19:26:01.600184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.285 [2024-12-06 19:26:01.606270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.285 [2024-12-06 19:26:01.606410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.285 [2024-12-06 19:26:01.606440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.285 [2024-12-06 19:26:01.612152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.285 [2024-12-06 19:26:01.612376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.285 [2024-12-06 19:26:01.612405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.285 [2024-12-06 19:26:01.618366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1533170) with pdu=0x200016eff3c8 00:27:51.285 [2024-12-06 19:26:01.618479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.285 [2024-12-06 19:26:01.618509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:51.285 6174.00 IOPS, 771.75 MiB/s 00:27:51.285 Latency(us) 00:27:51.285 [2024-12-06T18:26:01.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.285 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:51.285 nvme0n1 : 2.00 6169.27 771.16 0.00 0.00 2586.06 1614.13 7184.69 00:27:51.285 [2024-12-06T18:26:01.862Z] =================================================================================================================== 00:27:51.285 [2024-12-06T18:26:01.862Z] Total : 6169.27 771.16 0.00 0.00 2586.06 1614.13 7184.69 00:27:51.285 { 00:27:51.285 "results": [ 00:27:51.285 { 00:27:51.285 "job": "nvme0n1", 00:27:51.285 "core_mask": "0x2", 00:27:51.285 "workload": "randwrite", 00:27:51.285 "status": "finished", 00:27:51.285 "queue_depth": 16, 00:27:51.285 "io_size": 131072, 00:27:51.285 "runtime": 2.004774, 00:27:51.285 "iops": 6169.273943097825, 00:27:51.285 "mibps": 771.1592428872282, 00:27:51.285 "io_failed": 0, 00:27:51.285 "io_timeout": 0, 00:27:51.285 "avg_latency_us": 2586.0638934406593, 00:27:51.285 "min_latency_us": 1614.1274074074074, 00:27:51.285 "max_latency_us": 7184.687407407408 00:27:51.285 } 00:27:51.285 ], 00:27:51.285 "core_count": 1 00:27:51.285 } 00:27:51.285 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:51.285 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:51.285 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:51.285 | .driver_specific 00:27:51.285 | .nvme_error 00:27:51.285 | .status_code 00:27:51.285 | .command_transient_transport_error' 00:27:51.285 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b 
nvme0n1 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 399 > 0 )) 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1225263 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1225263 ']' 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1225263 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225263 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1225263' 00:27:51.544 killing process with pid 1225263 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1225263 00:27:51.544 Received shutdown signal, test time was about 2.000000 seconds 00:27:51.544 00:27:51.544 Latency(us) 00:27:51.544 [2024-12-06T18:26:02.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.544 [2024-12-06T18:26:02.121Z] =================================================================================================================== 00:27:51.544 [2024-12-06T18:26:02.121Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.544 19:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 1225263 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1223892 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1223892 ']' 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1223892 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1223892 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1223892' 00:27:51.802 killing process with pid 1223892 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1223892 00:27:51.802 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1223892 00:27:52.062 00:27:52.062 real 0m15.468s 00:27:52.062 user 0m31.066s 00:27:52.062 sys 0m4.279s 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:52.062 ************************************ 00:27:52.062 END TEST nvmf_digest_error 00:27:52.062 ************************************ 00:27:52.062 
19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.062 rmmod nvme_tcp 00:27:52.062 rmmod nvme_fabrics 00:27:52.062 rmmod nvme_keyring 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1223892 ']' 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1223892 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1223892 ']' 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1223892 00:27:52.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1223892) - No such process 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1223892 is not found' 00:27:52.062 Process with pid 1223892 is not found 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:52.062 19:26:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.062 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.603 19:26:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.603 00:27:54.603 real 0m35.856s 00:27:54.603 user 1m3.530s 00:27:54.603 sys 0m10.179s 00:27:54.603 19:26:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.603 19:26:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.603 ************************************ 00:27:54.603 END TEST nvmf_digest 00:27:54.604 ************************************ 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # 
run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.604 ************************************ 00:27:54.604 START TEST nvmf_bdevperf 00:27:54.604 ************************************ 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:54.604 * Looking for test storage... 00:27:54.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 
00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:54.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.604 --rc genhtml_branch_coverage=1 00:27:54.604 --rc genhtml_function_coverage=1 00:27:54.604 --rc genhtml_legend=1 00:27:54.604 --rc geninfo_all_blocks=1 00:27:54.604 --rc geninfo_unexecuted_blocks=1 00:27:54.604 00:27:54.604 ' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:54.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.604 --rc genhtml_branch_coverage=1 00:27:54.604 --rc genhtml_function_coverage=1 00:27:54.604 --rc genhtml_legend=1 00:27:54.604 --rc geninfo_all_blocks=1 00:27:54.604 --rc geninfo_unexecuted_blocks=1 00:27:54.604 00:27:54.604 ' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:54.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.604 --rc genhtml_branch_coverage=1 00:27:54.604 --rc genhtml_function_coverage=1 00:27:54.604 --rc genhtml_legend=1 00:27:54.604 --rc geninfo_all_blocks=1 00:27:54.604 --rc geninfo_unexecuted_blocks=1 00:27:54.604 00:27:54.604 ' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:54.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.604 --rc genhtml_branch_coverage=1 00:27:54.604 --rc genhtml_function_coverage=1 00:27:54.604 --rc genhtml_legend=1 00:27:54.604 --rc geninfo_all_blocks=1 00:27:54.604 --rc geninfo_unexecuted_blocks=1 00:27:54.604 00:27:54.604 ' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.604 19:26:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.604 19:26:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:54.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 
00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:54.604 19:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:27:56.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:56.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.505 19:26:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:56.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:56.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:56.505 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.506 19:26:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:56.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:27:56.506 00:27:56.506 --- 10.0.0.2 ping statistics --- 00:27:56.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.506 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:27:56.506 00:27:56.506 --- 10.0.0.1 ping statistics --- 00:27:56.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.506 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1227681 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1227681 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1227681 ']' 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.506 19:26:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:56.506 [2024-12-06 19:26:06.999208] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:56.506 [2024-12-06 19:26:06.999284] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.506 [2024-12-06 19:26:07.075804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.764 [2024-12-06 19:26:07.136786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.764 [2024-12-06 19:26:07.136851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:56.764 [2024-12-06 19:26:07.136881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.764 [2024-12-06 19:26:07.136893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.764 [2024-12-06 19:26:07.136903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.764 [2024-12-06 19:26:07.138470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.764 [2024-12-06 19:26:07.138535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.764 [2024-12-06 19:26:07.138539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:56.764 [2024-12-06 19:26:07.292396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.764 19:26:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.764 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.022 Malloc0 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.022 [2024-12-06 19:26:07.360588] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:57.022 { 00:27:57.022 "params": { 00:27:57.022 "name": "Nvme$subsystem", 00:27:57.022 "trtype": "$TEST_TRANSPORT", 00:27:57.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.022 "adrfam": "ipv4", 00:27:57.022 "trsvcid": "$NVMF_PORT", 00:27:57.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.022 "hdgst": ${hdgst:-false}, 00:27:57.022 "ddgst": ${ddgst:-false} 00:27:57.022 }, 00:27:57.022 "method": "bdev_nvme_attach_controller" 00:27:57.022 } 00:27:57.022 EOF 00:27:57.022 )") 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:57.022 19:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:57.022 "params": { 00:27:57.022 "name": "Nvme1", 00:27:57.022 "trtype": "tcp", 00:27:57.022 "traddr": "10.0.0.2", 00:27:57.022 "adrfam": "ipv4", 00:27:57.022 "trsvcid": "4420", 00:27:57.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:57.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:57.022 "hdgst": false, 00:27:57.022 "ddgst": false 00:27:57.022 }, 00:27:57.022 "method": "bdev_nvme_attach_controller" 00:27:57.022 }' 00:27:57.022 [2024-12-06 19:26:07.412035] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:57.022 [2024-12-06 19:26:07.412102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227765 ] 00:27:57.022 [2024-12-06 19:26:07.480126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.022 [2024-12-06 19:26:07.540671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.587 Running I/O for 1 seconds... 
00:27:58.519 8204.00 IOPS, 32.05 MiB/s 00:27:58.519 Latency(us) 00:27:58.519 [2024-12-06T18:26:09.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.519 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:58.519 Verification LBA range: start 0x0 length 0x4000 00:27:58.520 Nvme1n1 : 1.01 8282.64 32.35 0.00 0.00 15375.97 1601.99 20971.52 00:27:58.520 [2024-12-06T18:26:09.097Z] =================================================================================================================== 00:27:58.520 [2024-12-06T18:26:09.097Z] Total : 8282.64 32.35 0.00 0.00 15375.97 1601.99 20971.52 00:27:58.777 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1227910 00:27:58.777 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:58.777 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:58.777 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:58.777 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:58.777 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:58.777 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.777 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.777 { 00:27:58.777 "params": { 00:27:58.777 "name": "Nvme$subsystem", 00:27:58.777 "trtype": "$TEST_TRANSPORT", 00:27:58.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.777 "adrfam": "ipv4", 00:27:58.777 "trsvcid": "$NVMF_PORT", 00:27:58.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.777 "hdgst": ${hdgst:-false}, 00:27:58.777 "ddgst": 
${ddgst:-false} 00:27:58.777 }, 00:27:58.778 "method": "bdev_nvme_attach_controller" 00:27:58.778 } 00:27:58.778 EOF 00:27:58.778 )") 00:27:58.778 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:58.778 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:58.778 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:58.778 19:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:58.778 "params": { 00:27:58.778 "name": "Nvme1", 00:27:58.778 "trtype": "tcp", 00:27:58.778 "traddr": "10.0.0.2", 00:27:58.778 "adrfam": "ipv4", 00:27:58.778 "trsvcid": "4420", 00:27:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:58.778 "hdgst": false, 00:27:58.778 "ddgst": false 00:27:58.778 }, 00:27:58.778 "method": "bdev_nvme_attach_controller" 00:27:58.778 }' 00:27:58.778 [2024-12-06 19:26:09.181078] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:58.778 [2024-12-06 19:26:09.181170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227910 ] 00:27:58.778 [2024-12-06 19:26:09.251244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.778 [2024-12-06 19:26:09.309935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.344 Running I/O for 15 seconds... 
00:28:01.213 8176.00 IOPS, 31.94 MiB/s [2024-12-06T18:26:12.363Z] 8274.50 IOPS, 32.32 MiB/s [2024-12-06T18:26:12.363Z] 19:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1227681
00:28:01.786 19:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:28:01.786 [2024-12-06 19:26:12.145897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:01.786 [2024-12-06 19:26:12.145965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.786 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs elided: remaining in-flight WRITE (lba:35872-36624) and READ (lba:35616-35744) commands on qid:1 all completed as ABORTED - SQ DELETION (00/08) after bdevperf was killed ...]
19:26:12.149399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.788 [2024-12-06 19:26:12.149413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149542] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.789 [2024-12-06 19:26:12.149568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.789 [2024-12-06 19:26:12.149797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.149811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abb3a0 is same with the state(6) to be set 00:28:01.789 [2024-12-06 19:26:12.149829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:01.789 [2024-12-06 19:26:12.149840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:01.789 [2024-12-06 19:26:12.149852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35856 len:8 PRP1 0x0 PRP2 0x0 00:28:01.789 [2024-12-06 19:26:12.149865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.789 [2024-12-06 19:26:12.153277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.789 [2024-12-06 19:26:12.153355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.789 [2024-12-06 19:26:12.154001] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-12-06 19:26:12.154030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.789 [2024-12-06 19:26:12.154045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.789 [2024-12-06 19:26:12.154285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.789 [2024-12-06 19:26:12.154488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.789 [2024-12-06 19:26:12.154508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.789 [2024-12-06 19:26:12.154524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.789 [2024-12-06 19:26:12.154540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.789 [2024-12-06 19:26:12.166767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.789 [2024-12-06 19:26:12.167174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-12-06 19:26:12.167210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.789 [2024-12-06 19:26:12.167227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.789 [2024-12-06 19:26:12.167465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.789 [2024-12-06 19:26:12.167691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.789 [2024-12-06 19:26:12.167726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.789 [2024-12-06 19:26:12.167740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.789 [2024-12-06 19:26:12.167754] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.789 [2024-12-06 19:26:12.179806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.789 [2024-12-06 19:26:12.180148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-12-06 19:26:12.180176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.789 [2024-12-06 19:26:12.180191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.789 [2024-12-06 19:26:12.180409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.789 [2024-12-06 19:26:12.180620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.789 [2024-12-06 19:26:12.180639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.789 [2024-12-06 19:26:12.180651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.789 [2024-12-06 19:26:12.180662] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.789 [2024-12-06 19:26:12.192914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.789 [2024-12-06 19:26:12.193287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-12-06 19:26:12.193315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.789 [2024-12-06 19:26:12.193330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.789 [2024-12-06 19:26:12.193568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.789 [2024-12-06 19:26:12.193814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.789 [2024-12-06 19:26:12.193835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.789 [2024-12-06 19:26:12.193849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.789 [2024-12-06 19:26:12.193861] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.789 [2024-12-06 19:26:12.205998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.789 [2024-12-06 19:26:12.206365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-12-06 19:26:12.206393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.789 [2024-12-06 19:26:12.206409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.789 [2024-12-06 19:26:12.206652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.789 [2024-12-06 19:26:12.206878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.789 [2024-12-06 19:26:12.206897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.789 [2024-12-06 19:26:12.206910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.789 [2024-12-06 19:26:12.206922] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.789 [2024-12-06 19:26:12.219219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.789 [2024-12-06 19:26:12.219589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-12-06 19:26:12.219616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.789 [2024-12-06 19:26:12.219632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.789 [2024-12-06 19:26:12.219888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.789 [2024-12-06 19:26:12.220119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.789 [2024-12-06 19:26:12.220137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.789 [2024-12-06 19:26:12.220149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.789 [2024-12-06 19:26:12.220161] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.789 [2024-12-06 19:26:12.232309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.790 [2024-12-06 19:26:12.232681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-12-06 19:26:12.232710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.790 [2024-12-06 19:26:12.232725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.790 [2024-12-06 19:26:12.232962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.790 [2024-12-06 19:26:12.233157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.790 [2024-12-06 19:26:12.233175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.790 [2024-12-06 19:26:12.233187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.790 [2024-12-06 19:26:12.233199] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.790 [2024-12-06 19:26:12.245516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.790 [2024-12-06 19:26:12.245887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-12-06 19:26:12.245914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.790 [2024-12-06 19:26:12.245929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.790 [2024-12-06 19:26:12.246166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.790 [2024-12-06 19:26:12.246361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.790 [2024-12-06 19:26:12.246379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.790 [2024-12-06 19:26:12.246396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.790 [2024-12-06 19:26:12.246408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.790 [2024-12-06 19:26:12.258521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.790 [2024-12-06 19:26:12.258896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-12-06 19:26:12.258923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.790 [2024-12-06 19:26:12.258939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.790 [2024-12-06 19:26:12.259177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.790 [2024-12-06 19:26:12.259371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.790 [2024-12-06 19:26:12.259389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.790 [2024-12-06 19:26:12.259401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.790 [2024-12-06 19:26:12.259413] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.790 [2024-12-06 19:26:12.271573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.790 [2024-12-06 19:26:12.271985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-12-06 19:26:12.272027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.790 [2024-12-06 19:26:12.272043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.790 [2024-12-06 19:26:12.272279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.790 [2024-12-06 19:26:12.272473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.790 [2024-12-06 19:26:12.272491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.790 [2024-12-06 19:26:12.272503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.790 [2024-12-06 19:26:12.272514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.790 [2024-12-06 19:26:12.284702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.790 [2024-12-06 19:26:12.285070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-12-06 19:26:12.285097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.790 [2024-12-06 19:26:12.285113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.790 [2024-12-06 19:26:12.285349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.790 [2024-12-06 19:26:12.285544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.790 [2024-12-06 19:26:12.285562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.790 [2024-12-06 19:26:12.285574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.790 [2024-12-06 19:26:12.285586] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.790 [2024-12-06 19:26:12.297885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.790 [2024-12-06 19:26:12.298224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-12-06 19:26:12.298251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.790 [2024-12-06 19:26:12.298265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.790 [2024-12-06 19:26:12.298484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.790 [2024-12-06 19:26:12.298716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.790 [2024-12-06 19:26:12.298751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.790 [2024-12-06 19:26:12.298765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.790 [2024-12-06 19:26:12.298777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.790 [2024-12-06 19:26:12.311030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.790 [2024-12-06 19:26:12.311396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-12-06 19:26:12.311423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.790 [2024-12-06 19:26:12.311438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.790 [2024-12-06 19:26:12.311685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.790 [2024-12-06 19:26:12.311906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.790 [2024-12-06 19:26:12.311926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.790 [2024-12-06 19:26:12.311939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.790 [2024-12-06 19:26:12.311952] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.790 [2024-12-06 19:26:12.324069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.790 [2024-12-06 19:26:12.324471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-12-06 19:26:12.324498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.790 [2024-12-06 19:26:12.324514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.790 [2024-12-06 19:26:12.324765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.790 [2024-12-06 19:26:12.324973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.790 [2024-12-06 19:26:12.324992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.790 [2024-12-06 19:26:12.325019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.790 [2024-12-06 19:26:12.325031] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.790 [2024-12-06 19:26:12.337156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.790 [2024-12-06 19:26:12.337522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-12-06 19:26:12.337554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.791 [2024-12-06 19:26:12.337570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.791 [2024-12-06 19:26:12.337826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.791 [2024-12-06 19:26:12.338074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.791 [2024-12-06 19:26:12.338093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.791 [2024-12-06 19:26:12.338105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.791 [2024-12-06 19:26:12.338116] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.791 [2024-12-06 19:26:12.350225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.791 [2024-12-06 19:26:12.350594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-12-06 19:26:12.350621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:01.791 [2024-12-06 19:26:12.350636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:01.791 [2024-12-06 19:26:12.350890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:01.791 [2024-12-06 19:26:12.351106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.791 [2024-12-06 19:26:12.351124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.791 [2024-12-06 19:26:12.351136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.791 [2024-12-06 19:26:12.351147] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.050 [2024-12-06 19:26:12.363499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.050 [2024-12-06 19:26:12.363908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.050 [2024-12-06 19:26:12.363935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.050 [2024-12-06 19:26:12.363951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.050 [2024-12-06 19:26:12.364175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.050 [2024-12-06 19:26:12.364393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.050 [2024-12-06 19:26:12.364412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.050 [2024-12-06 19:26:12.364424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.050 [2024-12-06 19:26:12.364436] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.050 [2024-12-06 19:26:12.376922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.050 [2024-12-06 19:26:12.377317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.050 [2024-12-06 19:26:12.377345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.050 [2024-12-06 19:26:12.377360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.050 [2024-12-06 19:26:12.377604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.050 [2024-12-06 19:26:12.377857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.050 [2024-12-06 19:26:12.377878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.050 [2024-12-06 19:26:12.377892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.050 [2024-12-06 19:26:12.377905] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.050 [2024-12-06 19:26:12.390118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.050 [2024-12-06 19:26:12.390456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.050 [2024-12-06 19:26:12.390524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.050 [2024-12-06 19:26:12.390540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.050 [2024-12-06 19:26:12.390834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.050 [2024-12-06 19:26:12.391072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.050 [2024-12-06 19:26:12.391090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.050 [2024-12-06 19:26:12.391102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.050 [2024-12-06 19:26:12.391114] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.050 [2024-12-06 19:26:12.403203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.050 [2024-12-06 19:26:12.403572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.050 [2024-12-06 19:26:12.403601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.050 [2024-12-06 19:26:12.403616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.050 [2024-12-06 19:26:12.403844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.050 [2024-12-06 19:26:12.404085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.050 [2024-12-06 19:26:12.404104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.050 [2024-12-06 19:26:12.404117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.050 [2024-12-06 19:26:12.404130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.050 [2024-12-06 19:26:12.416929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.050 [2024-12-06 19:26:12.417323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.050 [2024-12-06 19:26:12.417351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.050 [2024-12-06 19:26:12.417367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.050 [2024-12-06 19:26:12.417584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.417835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.417856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.417890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.417904] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.430197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.430568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.430618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.430633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.430901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.431133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.431152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.431164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.431175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.443300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.443672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.443699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.443715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.443924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.444150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.444168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.444180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.444192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.456474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.456844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.456872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.456888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.457125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.457319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.457337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.457349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.457361] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.469710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.470119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.470146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.470161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.470398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.470608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.470626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.470638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.470674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.482884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.483204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.483229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.483243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.483440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.483650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.483691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.483705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.483717] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.495998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.496333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.496358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.496373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.496591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.496829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.496849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.496862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.496874] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.509221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.509552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.509592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.509612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.509877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.510111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.510129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.510141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.510154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.522239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.522604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.522631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.522646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.522902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.523139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.523157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.523169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.523180] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.535435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.535767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.535793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.535809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.536012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.051 [2024-12-06 19:26:12.536229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.051 [2024-12-06 19:26:12.536248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.051 [2024-12-06 19:26:12.536261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.051 [2024-12-06 19:26:12.536273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.051 [2024-12-06 19:26:12.548632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.051 [2024-12-06 19:26:12.549036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.051 [2024-12-06 19:26:12.549065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.051 [2024-12-06 19:26:12.549081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.051 [2024-12-06 19:26:12.549325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.052 [2024-12-06 19:26:12.549531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.052 [2024-12-06 19:26:12.549550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.052 [2024-12-06 19:26:12.549563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.052 [2024-12-06 19:26:12.549576] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.052 [2024-12-06 19:26:12.562401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.052 [2024-12-06 19:26:12.562810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.052 [2024-12-06 19:26:12.562839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.052 [2024-12-06 19:26:12.562855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.052 [2024-12-06 19:26:12.563097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.052 [2024-12-06 19:26:12.563292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.052 [2024-12-06 19:26:12.563310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.052 [2024-12-06 19:26:12.563322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.052 [2024-12-06 19:26:12.563334] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.052 [2024-12-06 19:26:12.575976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.052 [2024-12-06 19:26:12.576299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.052 [2024-12-06 19:26:12.576327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.052 [2024-12-06 19:26:12.576342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.052 [2024-12-06 19:26:12.576559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.052 [2024-12-06 19:26:12.576819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.052 [2024-12-06 19:26:12.576841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.052 [2024-12-06 19:26:12.576855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.052 [2024-12-06 19:26:12.576867] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.052 [2024-12-06 19:26:12.589595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.052 [2024-12-06 19:26:12.589975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.052 [2024-12-06 19:26:12.590003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.052 [2024-12-06 19:26:12.590019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.052 [2024-12-06 19:26:12.590250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.052 [2024-12-06 19:26:12.590466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.052 [2024-12-06 19:26:12.590484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.052 [2024-12-06 19:26:12.590501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.052 [2024-12-06 19:26:12.590514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.052 [2024-12-06 19:26:12.602913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.052 [2024-12-06 19:26:12.603290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.052 [2024-12-06 19:26:12.603317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.052 [2024-12-06 19:26:12.603334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.052 [2024-12-06 19:26:12.603572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.052 [2024-12-06 19:26:12.603816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.052 [2024-12-06 19:26:12.603838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.052 [2024-12-06 19:26:12.603851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.052 [2024-12-06 19:26:12.603864] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.052 [2024-12-06 19:26:12.616244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.052 [2024-12-06 19:26:12.616611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.052 [2024-12-06 19:26:12.616636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.052 [2024-12-06 19:26:12.616676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.052 [2024-12-06 19:26:12.616934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.052 [2024-12-06 19:26:12.617161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.052 [2024-12-06 19:26:12.617179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.052 [2024-12-06 19:26:12.617191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.052 [2024-12-06 19:26:12.617203] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.313 [2024-12-06 19:26:12.629850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.313 [2024-12-06 19:26:12.630179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.313 [2024-12-06 19:26:12.630217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.313 [2024-12-06 19:26:12.630251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.313 [2024-12-06 19:26:12.630470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.313 [2024-12-06 19:26:12.630746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.313 [2024-12-06 19:26:12.630769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.313 [2024-12-06 19:26:12.630782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.313 [2024-12-06 19:26:12.630795] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.313 6945.33 IOPS, 27.13 MiB/s [2024-12-06T18:26:12.890Z] [2024-12-06 19:26:12.643084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.313 [2024-12-06 19:26:12.643453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.313 [2024-12-06 19:26:12.643479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.313 [2024-12-06 19:26:12.643495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.313 [2024-12-06 19:26:12.643733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.313 [2024-12-06 19:26:12.643935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.313 [2024-12-06 19:26:12.643953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.313 [2024-12-06 19:26:12.643965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.313 [2024-12-06 19:26:12.643978] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.313 [2024-12-06 19:26:12.656273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.313 [2024-12-06 19:26:12.656648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.313 [2024-12-06 19:26:12.656732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.313 [2024-12-06 19:26:12.656749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.313 [2024-12-06 19:26:12.656981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.313 [2024-12-06 19:26:12.657219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.313 [2024-12-06 19:26:12.657252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.313 [2024-12-06 19:26:12.657266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.313 [2024-12-06 19:26:12.657279] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.313 [2024-12-06 19:26:12.669581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.313 [2024-12-06 19:26:12.669978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.313 [2024-12-06 19:26:12.670006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.313 [2024-12-06 19:26:12.670021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.313 [2024-12-06 19:26:12.670271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.313 [2024-12-06 19:26:12.670481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.313 [2024-12-06 19:26:12.670500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.313 [2024-12-06 19:26:12.670512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.313 [2024-12-06 19:26:12.670523] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.313 [2024-12-06 19:26:12.682936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.313 [2024-12-06 19:26:12.683282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.313 [2024-12-06 19:26:12.683375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.313 [2024-12-06 19:26:12.683391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.313 [2024-12-06 19:26:12.683621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.313 [2024-12-06 19:26:12.683849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.313 [2024-12-06 19:26:12.683869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.313 [2024-12-06 19:26:12.683882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.313 [2024-12-06 19:26:12.683895] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.313 [2024-12-06 19:26:12.696185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.313 [2024-12-06 19:26:12.696599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.313 [2024-12-06 19:26:12.696649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.313 [2024-12-06 19:26:12.696672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.313 [2024-12-06 19:26:12.696933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.313 [2024-12-06 19:26:12.697161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.313 [2024-12-06 19:26:12.697179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.313 [2024-12-06 19:26:12.697191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.313 [2024-12-06 19:26:12.697203] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.313 [2024-12-06 19:26:12.709250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.313 [2024-12-06 19:26:12.709586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.709653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.709677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.709882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.710095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.710113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.710125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.710136] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.722335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.722676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.722703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.722718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.722961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.723188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.723207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.723219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.723230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.735346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.735713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.735741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.735756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.735994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.736188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.736206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.736219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.736230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.748569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.748931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.748958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.748973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.749205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.749415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.749433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.749445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.749457] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.761625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.762001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.762029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.762044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.762281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.762476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.762494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.762511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.762524] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.774809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.775162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.775188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.775202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.775420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.775631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.775649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.775661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.775699] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.787966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.788354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.788381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.788396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.788633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.788863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.788884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.788897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.788909] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.801216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.801550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.801577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.801592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.801846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.802084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.802103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.802115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.802127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.814419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.814826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.814856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.814872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.815117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.815317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.815336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.815349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.815361] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.827690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.828087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.314 [2024-12-06 19:26:12.828114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.314 [2024-12-06 19:26:12.828129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.314 [2024-12-06 19:26:12.828338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.314 [2024-12-06 19:26:12.828554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.314 [2024-12-06 19:26:12.828573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.314 [2024-12-06 19:26:12.828585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.314 [2024-12-06 19:26:12.828597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.314 [2024-12-06 19:26:12.840967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.314 [2024-12-06 19:26:12.841340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.315 [2024-12-06 19:26:12.841369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.315 [2024-12-06 19:26:12.841384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.315 [2024-12-06 19:26:12.841627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.315 [2024-12-06 19:26:12.841875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.315 [2024-12-06 19:26:12.841896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.315 [2024-12-06 19:26:12.841918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.315 [2024-12-06 19:26:12.841932] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.315 [2024-12-06 19:26:12.854255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.315 [2024-12-06 19:26:12.854627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.315 [2024-12-06 19:26:12.854660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.315 [2024-12-06 19:26:12.854701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.315 [2024-12-06 19:26:12.854953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.315 [2024-12-06 19:26:12.855167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.315 [2024-12-06 19:26:12.855186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.315 [2024-12-06 19:26:12.855199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.315 [2024-12-06 19:26:12.855211] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.315 [2024-12-06 19:26:12.867464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.315 [2024-12-06 19:26:12.867875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.315 [2024-12-06 19:26:12.867903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.315 [2024-12-06 19:26:12.867918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.315 [2024-12-06 19:26:12.868155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.315 [2024-12-06 19:26:12.868366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.315 [2024-12-06 19:26:12.868385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.315 [2024-12-06 19:26:12.868397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.315 [2024-12-06 19:26:12.868409] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.315 [2024-12-06 19:26:12.880755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.315 [2024-12-06 19:26:12.881147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.315 [2024-12-06 19:26:12.881174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.315 [2024-12-06 19:26:12.881189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.315 [2024-12-06 19:26:12.881426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.315 [2024-12-06 19:26:12.881621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.315 [2024-12-06 19:26:12.881654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.315 [2024-12-06 19:26:12.881677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.315 [2024-12-06 19:26:12.881690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.576 [2024-12-06 19:26:12.894193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.576 [2024-12-06 19:26:12.894659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.576 [2024-12-06 19:26:12.894715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.576 [2024-12-06 19:26:12.894732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.576 [2024-12-06 19:26:12.894984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.576 [2024-12-06 19:26:12.895199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.576 [2024-12-06 19:26:12.895218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.576 [2024-12-06 19:26:12.895231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.576 [2024-12-06 19:26:12.895242] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.576 [2024-12-06 19:26:12.907271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.576 [2024-12-06 19:26:12.907742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.576 [2024-12-06 19:26:12.907771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.576 [2024-12-06 19:26:12.907787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.576 [2024-12-06 19:26:12.908004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.576 [2024-12-06 19:26:12.908266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.576 [2024-12-06 19:26:12.908287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.576 [2024-12-06 19:26:12.908301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.576 [2024-12-06 19:26:12.908313] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.576 [2024-12-06 19:26:12.920603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.576 [2024-12-06 19:26:12.920992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.576 [2024-12-06 19:26:12.921060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.576 [2024-12-06 19:26:12.921077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.576 [2024-12-06 19:26:12.921313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.576 [2024-12-06 19:26:12.921514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.576 [2024-12-06 19:26:12.921532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.576 [2024-12-06 19:26:12.921545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.576 [2024-12-06 19:26:12.921557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.576 [2024-12-06 19:26:12.933888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.576 [2024-12-06 19:26:12.934296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.576 [2024-12-06 19:26:12.934323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.576 [2024-12-06 19:26:12.934339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.576 [2024-12-06 19:26:12.934577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.576 [2024-12-06 19:26:12.934806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.576 [2024-12-06 19:26:12.934827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.576 [2024-12-06 19:26:12.934845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.576 [2024-12-06 19:26:12.934858] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.576 [2024-12-06 19:26:12.947074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.576 [2024-12-06 19:26:12.947407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.576 [2024-12-06 19:26:12.947433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.576 [2024-12-06 19:26:12.947448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.576 [2024-12-06 19:26:12.947651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.576 [2024-12-06 19:26:12.947897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.576 [2024-12-06 19:26:12.947917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.576 [2024-12-06 19:26:12.947930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.576 [2024-12-06 19:26:12.947942] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.576 [2024-12-06 19:26:12.960144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.576 [2024-12-06 19:26:12.960480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.576 [2024-12-06 19:26:12.960506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.576 [2024-12-06 19:26:12.960521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.576 [2024-12-06 19:26:12.960753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.576 [2024-12-06 19:26:12.960990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.576 [2024-12-06 19:26:12.961008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.576 [2024-12-06 19:26:12.961020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.576 [2024-12-06 19:26:12.961032] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.576 [2024-12-06 19:26:12.973221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.576 [2024-12-06 19:26:12.973590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.576 [2024-12-06 19:26:12.973617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.576 [2024-12-06 19:26:12.973632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.576 [2024-12-06 19:26:12.973885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.576 [2024-12-06 19:26:12.974115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.576 [2024-12-06 19:26:12.974133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.576 [2024-12-06 19:26:12.974145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.576 [2024-12-06 19:26:12.974157] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.576 [2024-12-06 19:26:12.986397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.576 [2024-12-06 19:26:12.986714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.576 [2024-12-06 19:26:12.986740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.576 [2024-12-06 19:26:12.986754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.576 [2024-12-06 19:26:12.986951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.576 [2024-12-06 19:26:12.987162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.576 [2024-12-06 19:26:12.987181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.576 [2024-12-06 19:26:12.987193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.576 [2024-12-06 19:26:12.987204] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.577 [2024-12-06 19:26:12.999530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.577 [2024-12-06 19:26:12.999931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.577 [2024-12-06 19:26:12.999958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.577 [2024-12-06 19:26:12.999972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.577 [2024-12-06 19:26:13.000175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.577 [2024-12-06 19:26:13.000401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.577 [2024-12-06 19:26:13.000420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.577 [2024-12-06 19:26:13.000432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.577 [2024-12-06 19:26:13.000444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.577 [... the identical nine-entry reset cycle for nqn.2016-06.io.spdk:cnode1 repeats every ~13 ms from 19:26:13.012696 through 19:26:13.356190: resetting controller -> connect() failed, errno = 111 (ECONNREFUSED) on tqpair=0x1aa8660, addr=10.0.0.2, port=4420 -> flush failed (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> in failed state -> Resetting controller failed ...]
00:28:02.840 [2024-12-06 19:26:13.368430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.840 [2024-12-06 19:26:13.368767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.840 [2024-12-06 19:26:13.368798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.840 [2024-12-06 19:26:13.368813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.840 [2024-12-06 19:26:13.369031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.840 [2024-12-06 19:26:13.369241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.840 [2024-12-06 19:26:13.369259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.840 [2024-12-06 19:26:13.369271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.840 [2024-12-06 19:26:13.369282] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.840 [2024-12-06 19:26:13.381692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.840 [2024-12-06 19:26:13.382119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.840 [2024-12-06 19:26:13.382145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.841 [2024-12-06 19:26:13.382161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.841 [2024-12-06 19:26:13.382397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.841 [2024-12-06 19:26:13.382607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.841 [2024-12-06 19:26:13.382626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.841 [2024-12-06 19:26:13.382638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.841 [2024-12-06 19:26:13.382675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.841 [2024-12-06 19:26:13.394879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.841 [2024-12-06 19:26:13.395260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.841 [2024-12-06 19:26:13.395286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.841 [2024-12-06 19:26:13.395302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.841 [2024-12-06 19:26:13.395504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.841 [2024-12-06 19:26:13.395757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.841 [2024-12-06 19:26:13.395776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.841 [2024-12-06 19:26:13.395788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.841 [2024-12-06 19:26:13.395800] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.841 [2024-12-06 19:26:13.408079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.841 [2024-12-06 19:26:13.408463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.841 [2024-12-06 19:26:13.408492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:02.841 [2024-12-06 19:26:13.408507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:02.841 [2024-12-06 19:26:13.408769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:02.841 [2024-12-06 19:26:13.408991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.841 [2024-12-06 19:26:13.409011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.841 [2024-12-06 19:26:13.409025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.841 [2024-12-06 19:26:13.409038] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.101 [2024-12-06 19:26:13.421766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.101 [2024-12-06 19:26:13.422192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.101 [2024-12-06 19:26:13.422220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.101 [2024-12-06 19:26:13.422236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.101 [2024-12-06 19:26:13.422479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.101 [2024-12-06 19:26:13.422707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.101 [2024-12-06 19:26:13.422728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.101 [2024-12-06 19:26:13.422741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.101 [2024-12-06 19:26:13.422754] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.101 [2024-12-06 19:26:13.434993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.101 [2024-12-06 19:26:13.435359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.101 [2024-12-06 19:26:13.435386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.101 [2024-12-06 19:26:13.435401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.101 [2024-12-06 19:26:13.435639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.101 [2024-12-06 19:26:13.435867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.101 [2024-12-06 19:26:13.435887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.101 [2024-12-06 19:26:13.435901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.101 [2024-12-06 19:26:13.435913] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.101 [2024-12-06 19:26:13.448121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.101 [2024-12-06 19:26:13.448485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.101 [2024-12-06 19:26:13.448511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.101 [2024-12-06 19:26:13.448525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.101 [2024-12-06 19:26:13.448739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.101 [2024-12-06 19:26:13.448956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.101 [2024-12-06 19:26:13.448989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.101 [2024-12-06 19:26:13.449008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.101 [2024-12-06 19:26:13.449020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.101 [2024-12-06 19:26:13.461227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.461594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.461621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.461636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.461871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.462100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.462118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.462130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.462141] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.102 [2024-12-06 19:26:13.474372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.474741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.474770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.474786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.475029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.475224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.475242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.475254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.475265] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.102 [2024-12-06 19:26:13.487501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.487828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.487855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.487870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.488072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.488299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.488318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.488330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.488341] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.102 [2024-12-06 19:26:13.500696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.501029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.501055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.501070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.501287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.501497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.501515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.501528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.501539] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.102 [2024-12-06 19:26:13.513833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.514208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.514235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.514251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.514488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.514708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.514728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.514740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.514752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.102 [2024-12-06 19:26:13.527079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.527446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.527473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.527488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.527737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.527938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.527956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.527968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.527980] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.102 [2024-12-06 19:26:13.540300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.540670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.540718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.540735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.540978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.541188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.541206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.541218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.541229] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.102 [2024-12-06 19:26:13.553495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.553871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.553898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.553913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.554151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.554346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.554364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.554376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.554388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.102 [2024-12-06 19:26:13.566634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.567000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.567025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.567040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.567258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.567468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.567486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.567498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.567510] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.102 [2024-12-06 19:26:13.579802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.102 [2024-12-06 19:26:13.580130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.102 [2024-12-06 19:26:13.580156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.102 [2024-12-06 19:26:13.580172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.102 [2024-12-06 19:26:13.580393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.102 [2024-12-06 19:26:13.580603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.102 [2024-12-06 19:26:13.580621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.102 [2024-12-06 19:26:13.580633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.102 [2024-12-06 19:26:13.580645] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.103 [2024-12-06 19:26:13.593046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.103 [2024-12-06 19:26:13.593413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.103 [2024-12-06 19:26:13.593440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.103 [2024-12-06 19:26:13.593455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.103 [2024-12-06 19:26:13.593703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.103 [2024-12-06 19:26:13.593918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.103 [2024-12-06 19:26:13.593937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.103 [2024-12-06 19:26:13.593949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.103 [2024-12-06 19:26:13.593960] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.103 [2024-12-06 19:26:13.606184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.103 [2024-12-06 19:26:13.606518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.103 [2024-12-06 19:26:13.606544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.103 [2024-12-06 19:26:13.606559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.103 [2024-12-06 19:26:13.606805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.103 [2024-12-06 19:26:13.607035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.103 [2024-12-06 19:26:13.607053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.103 [2024-12-06 19:26:13.607065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.103 [2024-12-06 19:26:13.607077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.103 [2024-12-06 19:26:13.619401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.103 [2024-12-06 19:26:13.619744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.103 [2024-12-06 19:26:13.619770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.103 [2024-12-06 19:26:13.619785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.103 [2024-12-06 19:26:13.620003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.103 [2024-12-06 19:26:13.620213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.103 [2024-12-06 19:26:13.620232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.103 [2024-12-06 19:26:13.620249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.103 [2024-12-06 19:26:13.620260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.103 [2024-12-06 19:26:13.632537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.103 [2024-12-06 19:26:13.632876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.103 [2024-12-06 19:26:13.632903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.103 [2024-12-06 19:26:13.632917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.103 [2024-12-06 19:26:13.633135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.103 [2024-12-06 19:26:13.633345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.103 [2024-12-06 19:26:13.633364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.103 [2024-12-06 19:26:13.633376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.103 [2024-12-06 19:26:13.633388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.103 5209.00 IOPS, 20.35 MiB/s [2024-12-06T18:26:13.680Z] [2024-12-06 19:26:13.645651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.103 [2024-12-06 19:26:13.646028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.103 [2024-12-06 19:26:13.646056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.103 [2024-12-06 19:26:13.646072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.103 [2024-12-06 19:26:13.646308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.103 [2024-12-06 19:26:13.646503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.103 [2024-12-06 19:26:13.646520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.103 [2024-12-06 19:26:13.646533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.103 [2024-12-06 19:26:13.646544] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.103 [2024-12-06 19:26:13.658899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.103 [2024-12-06 19:26:13.659306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.103 [2024-12-06 19:26:13.659334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.103 [2024-12-06 19:26:13.659350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.103 [2024-12-06 19:26:13.659582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.103 [2024-12-06 19:26:13.659852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.103 [2024-12-06 19:26:13.659874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.103 [2024-12-06 19:26:13.659888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.103 [2024-12-06 19:26:13.659901] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.103 [2024-12-06 19:26:13.672529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.103 [2024-12-06 19:26:13.672952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.103 [2024-12-06 19:26:13.672980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.103 [2024-12-06 19:26:13.672996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.103 [2024-12-06 19:26:13.673248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.103 [2024-12-06 19:26:13.673443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.103 [2024-12-06 19:26:13.673461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.103 [2024-12-06 19:26:13.673473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.103 [2024-12-06 19:26:13.673484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.364 [2024-12-06 19:26:13.686299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.364 [2024-12-06 19:26:13.686758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.364 [2024-12-06 19:26:13.686786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.364 [2024-12-06 19:26:13.686802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.364 [2024-12-06 19:26:13.687018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.364 [2024-12-06 19:26:13.687228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.364 [2024-12-06 19:26:13.687247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.364 [2024-12-06 19:26:13.687259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.364 [2024-12-06 19:26:13.687271] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.364 [2024-12-06 19:26:13.699508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.364 [2024-12-06 19:26:13.699859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.364 [2024-12-06 19:26:13.699886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.364 [2024-12-06 19:26:13.699901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.364 [2024-12-06 19:26:13.700136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.364 [2024-12-06 19:26:13.700347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.364 [2024-12-06 19:26:13.700365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.364 [2024-12-06 19:26:13.700377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.364 [2024-12-06 19:26:13.700389] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.364 [2024-12-06 19:26:13.713064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.364 [2024-12-06 19:26:13.713470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.364 [2024-12-06 19:26:13.713503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.364 [2024-12-06 19:26:13.713520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.364 [2024-12-06 19:26:13.713747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.364 [2024-12-06 19:26:13.713984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.364 [2024-12-06 19:26:13.714003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.364 [2024-12-06 19:26:13.714031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.364 [2024-12-06 19:26:13.714044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.364 [2024-12-06 19:26:13.726728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.364 [2024-12-06 19:26:13.727063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.364 [2024-12-06 19:26:13.727091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.364 [2024-12-06 19:26:13.727107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.364 [2024-12-06 19:26:13.727338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.364 [2024-12-06 19:26:13.727561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.364 [2024-12-06 19:26:13.727580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.364 [2024-12-06 19:26:13.727593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.364 [2024-12-06 19:26:13.727605] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.364 [2024-12-06 19:26:13.740368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.364 [2024-12-06 19:26:13.740741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.364 [2024-12-06 19:26:13.740770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.364 [2024-12-06 19:26:13.740788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.364 [2024-12-06 19:26:13.741018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.364 [2024-12-06 19:26:13.741242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.364 [2024-12-06 19:26:13.741262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.364 [2024-12-06 19:26:13.741275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.364 [2024-12-06 19:26:13.741287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.364 [2024-12-06 19:26:13.753721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.364 [2024-12-06 19:26:13.754168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.364 [2024-12-06 19:26:13.754216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.364 [2024-12-06 19:26:13.754231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.364 [2024-12-06 19:26:13.754465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.364 [2024-12-06 19:26:13.754686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.364 [2024-12-06 19:26:13.754707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.364 [2024-12-06 19:26:13.754721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.364 [2024-12-06 19:26:13.754734] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.364 [2024-12-06 19:26:13.767270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.364 [2024-12-06 19:26:13.767653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.364 [2024-12-06 19:26:13.767688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.364 [2024-12-06 19:26:13.767705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.767921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.768190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.768209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.768222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.768235] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.780784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.781231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.365 [2024-12-06 19:26:13.781277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.365 [2024-12-06 19:26:13.781293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.781530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.781765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.781786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.781800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.781814] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.794205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.794537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.365 [2024-12-06 19:26:13.794564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.365 [2024-12-06 19:26:13.794579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.794832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.795072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.795095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.795109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.795121] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.807556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.807939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.365 [2024-12-06 19:26:13.807977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.365 [2024-12-06 19:26:13.808008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.808246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.808447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.808466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.808478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.808490] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.820866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.821282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.365 [2024-12-06 19:26:13.821309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.365 [2024-12-06 19:26:13.821324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.821548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.821795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.821816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.821829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.821842] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.834248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.834637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.365 [2024-12-06 19:26:13.834674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.365 [2024-12-06 19:26:13.834692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.834924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.835161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.835180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.835192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.835205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.847547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.847924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.365 [2024-12-06 19:26:13.847953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.365 [2024-12-06 19:26:13.847969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.848201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.848417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.848435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.848448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.848459] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.860808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.861140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.365 [2024-12-06 19:26:13.861166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.365 [2024-12-06 19:26:13.861181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.861384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.861599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.861618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.861630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.861656] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.874200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.874546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.365 [2024-12-06 19:26:13.874573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.365 [2024-12-06 19:26:13.874588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.874858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.875078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.875097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.875109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.875121] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.887512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.887887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.365 [2024-12-06 19:26:13.887920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.365 [2024-12-06 19:26:13.887936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.365 [2024-12-06 19:26:13.888178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.365 [2024-12-06 19:26:13.888394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.365 [2024-12-06 19:26:13.888413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.365 [2024-12-06 19:26:13.888426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.365 [2024-12-06 19:26:13.888437] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.365 [2024-12-06 19:26:13.900862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.365 [2024-12-06 19:26:13.901192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.366 [2024-12-06 19:26:13.901218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.366 [2024-12-06 19:26:13.901233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.366 [2024-12-06 19:26:13.901436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.366 [2024-12-06 19:26:13.901676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.366 [2024-12-06 19:26:13.901697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.366 [2024-12-06 19:26:13.901710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.366 [2024-12-06 19:26:13.901723] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.366 [2024-12-06 19:26:13.914256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.366 [2024-12-06 19:26:13.914621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.366 [2024-12-06 19:26:13.914648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.366 [2024-12-06 19:26:13.914672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.366 [2024-12-06 19:26:13.914891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.366 [2024-12-06 19:26:13.915122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.366 [2024-12-06 19:26:13.915142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.366 [2024-12-06 19:26:13.915155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.366 [2024-12-06 19:26:13.915168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.366 [2024-12-06 19:26:13.927905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.366 [2024-12-06 19:26:13.928296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.366 [2024-12-06 19:26:13.928325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.366 [2024-12-06 19:26:13.928340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.366 [2024-12-06 19:26:13.928577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.366 [2024-12-06 19:26:13.928834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.366 [2024-12-06 19:26:13.928857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.366 [2024-12-06 19:26:13.928871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.366 [2024-12-06 19:26:13.928884] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.626 [2024-12-06 19:26:13.941606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.626 [2024-12-06 19:26:13.941950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.626 [2024-12-06 19:26:13.941976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.627 [2024-12-06 19:26:13.942006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.627 [2024-12-06 19:26:13.942209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.627 [2024-12-06 19:26:13.942425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.627 [2024-12-06 19:26:13.942444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.627 [2024-12-06 19:26:13.942457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.627 [2024-12-06 19:26:13.942469] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.627 [2024-12-06 19:26:13.954930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.627 [2024-12-06 19:26:13.955275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.627 [2024-12-06 19:26:13.955302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.627 [2024-12-06 19:26:13.955317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.627 [2024-12-06 19:26:13.955541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.627 [2024-12-06 19:26:13.955783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.627 [2024-12-06 19:26:13.955803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.627 [2024-12-06 19:26:13.955817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.627 [2024-12-06 19:26:13.955829] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.627 [2024-12-06 19:26:13.968202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.627 [2024-12-06 19:26:13.968577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.627 [2024-12-06 19:26:13.968605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.627 [2024-12-06 19:26:13.968621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.627 [2024-12-06 19:26:13.968861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.627 [2024-12-06 19:26:13.969095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.627 [2024-12-06 19:26:13.969114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.627 [2024-12-06 19:26:13.969130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.627 [2024-12-06 19:26:13.969143] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.627 [2024-12-06 19:26:13.981686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.627 [2024-12-06 19:26:13.982136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.627 [2024-12-06 19:26:13.982165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.627 [2024-12-06 19:26:13.982181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.627 [2024-12-06 19:26:13.982412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.627 [2024-12-06 19:26:13.982635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.627 [2024-12-06 19:26:13.982679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.627 [2024-12-06 19:26:13.982694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.627 [2024-12-06 19:26:13.982707] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.627 [2024-12-06 19:26:13.995114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.627 [2024-12-06 19:26:13.995437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.627 [2024-12-06 19:26:13.995464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.627 [2024-12-06 19:26:13.995479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.627 [2024-12-06 19:26:13.995731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.627 [2024-12-06 19:26:13.995945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.627 [2024-12-06 19:26:13.995980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.627 [2024-12-06 19:26:13.995994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.627 [2024-12-06 19:26:13.996006] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.627 [2024-12-06 19:26:14.008438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.627 [2024-12-06 19:26:14.008821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.627 [2024-12-06 19:26:14.008850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.627 [2024-12-06 19:26:14.008865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.627 [2024-12-06 19:26:14.009112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.627 [2024-12-06 19:26:14.009329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.627 [2024-12-06 19:26:14.009348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.627 [2024-12-06 19:26:14.009361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.627 [2024-12-06 19:26:14.009372] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.627 [2024-12-06 19:26:14.021733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.627 [2024-12-06 19:26:14.022096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.627 [2024-12-06 19:26:14.022122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.627 [2024-12-06 19:26:14.022138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.627 [2024-12-06 19:26:14.022361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.627 [2024-12-06 19:26:14.022579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.627 [2024-12-06 19:26:14.022598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.627 [2024-12-06 19:26:14.022611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.627 [2024-12-06 19:26:14.022623] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.627 [2024-12-06 19:26:14.035014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.627 [2024-12-06 19:26:14.035332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.627 [2024-12-06 19:26:14.035359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.627 [2024-12-06 19:26:14.035374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.627 [2024-12-06 19:26:14.035578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.627 [2024-12-06 19:26:14.035823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.627 [2024-12-06 19:26:14.035844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.627 [2024-12-06 19:26:14.035857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.627 [2024-12-06 19:26:14.035870] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.627 [2024-12-06 19:26:14.048381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.627 [2024-12-06 19:26:14.048731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.627 [2024-12-06 19:26:14.048759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.627 [2024-12-06 19:26:14.048775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.627 [2024-12-06 19:26:14.049008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.627 [2024-12-06 19:26:14.049229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.627 [2024-12-06 19:26:14.049248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.627 [2024-12-06 19:26:14.049260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.627 [2024-12-06 19:26:14.049272] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.627 [2024-12-06 19:26:14.061818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.627 [2024-12-06 19:26:14.062190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.627 [2024-12-06 19:26:14.062223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.627 [2024-12-06 19:26:14.062239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.627 [2024-12-06 19:26:14.062472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.627 [2024-12-06 19:26:14.062725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.627 [2024-12-06 19:26:14.062746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.627 [2024-12-06 19:26:14.062759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.627 [2024-12-06 19:26:14.062772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.627 [2024-12-06 19:26:14.075073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.075449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.075477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.075492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.075735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.075972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.075991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.076004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.628 [2024-12-06 19:26:14.076016] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.628 [2024-12-06 19:26:14.088417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.088820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.088848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.088864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.089108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.089309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.089327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.089340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.628 [2024-12-06 19:26:14.089352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.628 [2024-12-06 19:26:14.101741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.102150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.102177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.102193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.102439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.102639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.102681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.102695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.628 [2024-12-06 19:26:14.102708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.628 [2024-12-06 19:26:14.115066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.115400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.115427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.115442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.115676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.115884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.115903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.115916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.628 [2024-12-06 19:26:14.115928] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.628 [2024-12-06 19:26:14.128385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.128755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.128784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.128800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.129045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.129245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.129265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.129277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.628 [2024-12-06 19:26:14.129289] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.628 [2024-12-06 19:26:14.141709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.142163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.142192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.142208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.142452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.142652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.142695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.142723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.628 [2024-12-06 19:26:14.142737] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.628 [2024-12-06 19:26:14.155096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.155470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.155497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.155513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.155755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.155992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.156011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.156023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.628 [2024-12-06 19:26:14.156035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.628 [2024-12-06 19:26:14.168362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.168753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.168780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.168796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.169030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.169290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.169311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.169324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.628 [2024-12-06 19:26:14.169337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.628 [2024-12-06 19:26:14.181750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.182189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.182217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.182233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.182476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.182748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.182770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.182784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.628 [2024-12-06 19:26:14.182797] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.628 [2024-12-06 19:26:14.195151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.628 [2024-12-06 19:26:14.195498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.628 [2024-12-06 19:26:14.195525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.628 [2024-12-06 19:26:14.195540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.628 [2024-12-06 19:26:14.195781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.628 [2024-12-06 19:26:14.196039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.628 [2024-12-06 19:26:14.196058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.628 [2024-12-06 19:26:14.196071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.629 [2024-12-06 19:26:14.196083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.889 [2024-12-06 19:26:14.208832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.889 [2024-12-06 19:26:14.209244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.889 [2024-12-06 19:26:14.209271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.889 [2024-12-06 19:26:14.209286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.889 [2024-12-06 19:26:14.209495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.889 [2024-12-06 19:26:14.209773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.889 [2024-12-06 19:26:14.209794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.889 [2024-12-06 19:26:14.209807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.889 [2024-12-06 19:26:14.209820] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.889 [2024-12-06 19:26:14.222312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.889 [2024-12-06 19:26:14.222643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.889 [2024-12-06 19:26:14.222692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.889 [2024-12-06 19:26:14.222710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.889 [2024-12-06 19:26:14.222941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.889 [2024-12-06 19:26:14.223160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.889 [2024-12-06 19:26:14.223179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.889 [2024-12-06 19:26:14.223191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.889 [2024-12-06 19:26:14.223203] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.889 [2024-12-06 19:26:14.235698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.889 [2024-12-06 19:26:14.236059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.889 [2024-12-06 19:26:14.236090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.889 [2024-12-06 19:26:14.236106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.889 [2024-12-06 19:26:14.236330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.889 [2024-12-06 19:26:14.236547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.889 [2024-12-06 19:26:14.236565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.889 [2024-12-06 19:26:14.236577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.889 [2024-12-06 19:26:14.236589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.889 [2024-12-06 19:26:14.249026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.889 [2024-12-06 19:26:14.249352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.890 [2024-12-06 19:26:14.249379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.890 [2024-12-06 19:26:14.249394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.890 [2024-12-06 19:26:14.249596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.890 [2024-12-06 19:26:14.249841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.890 [2024-12-06 19:26:14.249862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.890 [2024-12-06 19:26:14.249875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.890 [2024-12-06 19:26:14.249887] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.890 [2024-12-06 19:26:14.262398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.890 [2024-12-06 19:26:14.262823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.890 [2024-12-06 19:26:14.262851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.890 [2024-12-06 19:26:14.262867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.890 [2024-12-06 19:26:14.263112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.890 [2024-12-06 19:26:14.263313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.890 [2024-12-06 19:26:14.263332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.890 [2024-12-06 19:26:14.263345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.890 [2024-12-06 19:26:14.263357] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.890 [2024-12-06 19:26:14.275744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.890 [2024-12-06 19:26:14.276159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.890 [2024-12-06 19:26:14.276187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.890 [2024-12-06 19:26:14.276203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.890 [2024-12-06 19:26:14.276454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.890 [2024-12-06 19:26:14.276678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.890 [2024-12-06 19:26:14.276698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.890 [2024-12-06 19:26:14.276712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.890 [2024-12-06 19:26:14.276724] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.890 [2024-12-06 19:26:14.289115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.890 [2024-12-06 19:26:14.289495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.890 [2024-12-06 19:26:14.289522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.890 [2024-12-06 19:26:14.289538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.890 [2024-12-06 19:26:14.289781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.890 [2024-12-06 19:26:14.290008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.890 [2024-12-06 19:26:14.290027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.890 [2024-12-06 19:26:14.290039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.890 [2024-12-06 19:26:14.290051] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.890 [2024-12-06 19:26:14.302424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.890 [2024-12-06 19:26:14.302792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.890 [2024-12-06 19:26:14.302820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.890 [2024-12-06 19:26:14.302835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.890 [2024-12-06 19:26:14.303079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.890 [2024-12-06 19:26:14.303295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.890 [2024-12-06 19:26:14.303313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.890 [2024-12-06 19:26:14.303326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.890 [2024-12-06 19:26:14.303338] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.890 [2024-12-06 19:26:14.315670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.890 [2024-12-06 19:26:14.316088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.890 [2024-12-06 19:26:14.316115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:03.890 [2024-12-06 19:26:14.316131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:03.890 [2024-12-06 19:26:14.316373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:03.890 [2024-12-06 19:26:14.316591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.890 [2024-12-06 19:26:14.316610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.890 [2024-12-06 19:26:14.316627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.890 [2024-12-06 19:26:14.316639] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.890 [2024-12-06 19:26:14.328872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.890 [2024-12-06 19:26:14.329197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.890 [2024-12-06 19:26:14.329222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.890 [2024-12-06 19:26:14.329237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.890 [2024-12-06 19:26:14.329440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.890 [2024-12-06 19:26:14.329681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.890 [2024-12-06 19:26:14.329701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.890 [2024-12-06 19:26:14.329714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.890 [2024-12-06 19:26:14.329726] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.890 [2024-12-06 19:26:14.342126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.890 [2024-12-06 19:26:14.342453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.890 [2024-12-06 19:26:14.342480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.890 [2024-12-06 19:26:14.342495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.890 [2024-12-06 19:26:14.342749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.890 [2024-12-06 19:26:14.342988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.890 [2024-12-06 19:26:14.343007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.890 [2024-12-06 19:26:14.343019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.890 [2024-12-06 19:26:14.343032] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.890 [2024-12-06 19:26:14.355427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.890 [2024-12-06 19:26:14.355823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.890 [2024-12-06 19:26:14.355851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.890 [2024-12-06 19:26:14.355867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.890 [2024-12-06 19:26:14.356111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.890 [2024-12-06 19:26:14.356311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.890 [2024-12-06 19:26:14.356329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.890 [2024-12-06 19:26:14.356342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.890 [2024-12-06 19:26:14.356354] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.890 [2024-12-06 19:26:14.368706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.890 [2024-12-06 19:26:14.369109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.890 [2024-12-06 19:26:14.369136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.890 [2024-12-06 19:26:14.369151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.890 [2024-12-06 19:26:14.369376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.890 [2024-12-06 19:26:14.369592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.890 [2024-12-06 19:26:14.369611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.890 [2024-12-06 19:26:14.369623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.890 [2024-12-06 19:26:14.369635] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.890 [2024-12-06 19:26:14.382058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.891 [2024-12-06 19:26:14.382436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.891 [2024-12-06 19:26:14.382464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.891 [2024-12-06 19:26:14.382480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.891 [2024-12-06 19:26:14.382730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.891 [2024-12-06 19:26:14.382959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.891 [2024-12-06 19:26:14.382979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.891 [2024-12-06 19:26:14.382992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.891 [2024-12-06 19:26:14.383019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.891 [2024-12-06 19:26:14.395433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.891 [2024-12-06 19:26:14.395770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.891 [2024-12-06 19:26:14.395797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.891 [2024-12-06 19:26:14.395812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.891 [2024-12-06 19:26:14.396036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.891 [2024-12-06 19:26:14.396253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.891 [2024-12-06 19:26:14.396271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.891 [2024-12-06 19:26:14.396284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.891 [2024-12-06 19:26:14.396296] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.891 [2024-12-06 19:26:14.408685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.891 [2024-12-06 19:26:14.409019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.891 [2024-12-06 19:26:14.409051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.891 [2024-12-06 19:26:14.409068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.891 [2024-12-06 19:26:14.409278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.891 [2024-12-06 19:26:14.409494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.891 [2024-12-06 19:26:14.409513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.891 [2024-12-06 19:26:14.409525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.891 [2024-12-06 19:26:14.409537] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.891 [2024-12-06 19:26:14.422069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.891 [2024-12-06 19:26:14.422407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.891 [2024-12-06 19:26:14.422434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.891 [2024-12-06 19:26:14.422449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.891 [2024-12-06 19:26:14.422685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.891 [2024-12-06 19:26:14.422906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.891 [2024-12-06 19:26:14.422926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.891 [2024-12-06 19:26:14.422939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.891 [2024-12-06 19:26:14.422952] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.891 [2024-12-06 19:26:14.435521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.891 [2024-12-06 19:26:14.435922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.891 [2024-12-06 19:26:14.435950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.891 [2024-12-06 19:26:14.435966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.891 [2024-12-06 19:26:14.436197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.891 [2024-12-06 19:26:14.436420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.891 [2024-12-06 19:26:14.436439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.891 [2024-12-06 19:26:14.436452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.891 [2024-12-06 19:26:14.436464] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.891 [2024-12-06 19:26:14.448911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.891 [2024-12-06 19:26:14.449298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.891 [2024-12-06 19:26:14.449326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.891 [2024-12-06 19:26:14.449342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.891 [2024-12-06 19:26:14.449590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.891 [2024-12-06 19:26:14.449832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.891 [2024-12-06 19:26:14.449854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.891 [2024-12-06 19:26:14.449868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.891 [2024-12-06 19:26:14.449881] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.891 [2024-12-06 19:26:14.462613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.891 [2024-12-06 19:26:14.463083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.891 [2024-12-06 19:26:14.463110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:03.891 [2024-12-06 19:26:14.463125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:03.891 [2024-12-06 19:26:14.463364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:03.891 [2024-12-06 19:26:14.463596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.891 [2024-12-06 19:26:14.463617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.891 [2024-12-06 19:26:14.463631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.891 [2024-12-06 19:26:14.463644] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.152 [2024-12-06 19:26:14.476003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.152 [2024-12-06 19:26:14.476332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.152 [2024-12-06 19:26:14.476358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.152 [2024-12-06 19:26:14.476374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.152 [2024-12-06 19:26:14.476591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.152 [2024-12-06 19:26:14.476823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.152 [2024-12-06 19:26:14.476843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.152 [2024-12-06 19:26:14.476855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.152 [2024-12-06 19:26:14.476868] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.152 [2024-12-06 19:26:14.489304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.152 [2024-12-06 19:26:14.489686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.152 [2024-12-06 19:26:14.489715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.152 [2024-12-06 19:26:14.489730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.152 [2024-12-06 19:26:14.489961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.152 [2024-12-06 19:26:14.490178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.152 [2024-12-06 19:26:14.490197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.152 [2024-12-06 19:26:14.490214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.152 [2024-12-06 19:26:14.490227] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.152 [2024-12-06 19:26:14.502556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.152 [2024-12-06 19:26:14.502939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.152 [2024-12-06 19:26:14.502968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.152 [2024-12-06 19:26:14.502984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.152 [2024-12-06 19:26:14.503223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.152 [2024-12-06 19:26:14.503440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.152 [2024-12-06 19:26:14.503458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.152 [2024-12-06 19:26:14.503471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.152 [2024-12-06 19:26:14.503482] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.152 [2024-12-06 19:26:14.515832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.152 [2024-12-06 19:26:14.516194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.152 [2024-12-06 19:26:14.516221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.152 [2024-12-06 19:26:14.516236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.152 [2024-12-06 19:26:14.516460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.152 [2024-12-06 19:26:14.516703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.152 [2024-12-06 19:26:14.516723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.152 [2024-12-06 19:26:14.516736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.152 [2024-12-06 19:26:14.516748] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.152 [2024-12-06 19:26:14.529133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.152 [2024-12-06 19:26:14.529469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.152 [2024-12-06 19:26:14.529495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.152 [2024-12-06 19:26:14.529510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.152 [2024-12-06 19:26:14.529765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.152 [2024-12-06 19:26:14.530000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.152 [2024-12-06 19:26:14.530019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.152 [2024-12-06 19:26:14.530032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.152 [2024-12-06 19:26:14.530043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.152 [2024-12-06 19:26:14.542384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.152 [2024-12-06 19:26:14.542804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.152 [2024-12-06 19:26:14.542832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.152 [2024-12-06 19:26:14.542848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.152 [2024-12-06 19:26:14.543080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.152 [2024-12-06 19:26:14.543297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.152 [2024-12-06 19:26:14.543316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.152 [2024-12-06 19:26:14.543329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.152 [2024-12-06 19:26:14.543341] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.152 [2024-12-06 19:26:14.555714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.152 [2024-12-06 19:26:14.556123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.152 [2024-12-06 19:26:14.556151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.152 [2024-12-06 19:26:14.556167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.152 [2024-12-06 19:26:14.556412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.152 [2024-12-06 19:26:14.556612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.152 [2024-12-06 19:26:14.556630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.152 [2024-12-06 19:26:14.556657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.152 [2024-12-06 19:26:14.556680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.152 [2024-12-06 19:26:14.569061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.152 [2024-12-06 19:26:14.569394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.152 [2024-12-06 19:26:14.569421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.152 [2024-12-06 19:26:14.569436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.152 [2024-12-06 19:26:14.569659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.153 [2024-12-06 19:26:14.569877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.153 [2024-12-06 19:26:14.569896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.153 [2024-12-06 19:26:14.569909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.153 [2024-12-06 19:26:14.569921] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.153 [2024-12-06 19:26:14.582427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.153 [2024-12-06 19:26:14.582829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.153 [2024-12-06 19:26:14.582861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.153 [2024-12-06 19:26:14.582878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.153 [2024-12-06 19:26:14.583123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.153 [2024-12-06 19:26:14.583323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.153 [2024-12-06 19:26:14.583341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.153 [2024-12-06 19:26:14.583354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.153 [2024-12-06 19:26:14.583366] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.153 [2024-12-06 19:26:14.595800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.153 [2024-12-06 19:26:14.596151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.153 [2024-12-06 19:26:14.596178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.153 [2024-12-06 19:26:14.596193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.153 [2024-12-06 19:26:14.596403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.153 [2024-12-06 19:26:14.596635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.153 [2024-12-06 19:26:14.596654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.153 [2024-12-06 19:26:14.596688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.153 [2024-12-06 19:26:14.596704] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.153 [2024-12-06 19:26:14.609057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.153 [2024-12-06 19:26:14.609433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.153 [2024-12-06 19:26:14.609460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.153 [2024-12-06 19:26:14.609477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.153 [2024-12-06 19:26:14.609732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.153 [2024-12-06 19:26:14.609938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.153 [2024-12-06 19:26:14.609957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.153 [2024-12-06 19:26:14.609970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.153 [2024-12-06 19:26:14.609997] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.153 [2024-12-06 19:26:14.622316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.153 [2024-12-06 19:26:14.622696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.153 [2024-12-06 19:26:14.622724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.153 [2024-12-06 19:26:14.622740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.153 [2024-12-06 19:26:14.622975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.153 [2024-12-06 19:26:14.623192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.153 [2024-12-06 19:26:14.623211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.153 [2024-12-06 19:26:14.623223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.153 [2024-12-06 19:26:14.623235] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.153 [2024-12-06 19:26:14.635572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.153 [2024-12-06 19:26:14.635949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.153 [2024-12-06 19:26:14.635977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.153 [2024-12-06 19:26:14.635993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.153 [2024-12-06 19:26:14.636225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.153 [2024-12-06 19:26:14.636447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.153 [2024-12-06 19:26:14.636467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.153 [2024-12-06 19:26:14.636479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.153 [2024-12-06 19:26:14.636491] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.153 4167.20 IOPS, 16.28 MiB/s [2024-12-06T18:26:14.730Z] [2024-12-06 19:26:14.648970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.153 [2024-12-06 19:26:14.649343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.153 [2024-12-06 19:26:14.649371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.153 [2024-12-06 19:26:14.649386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.153 [2024-12-06 19:26:14.649630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.153 [2024-12-06 19:26:14.649865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.153 [2024-12-06 19:26:14.649886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.153 [2024-12-06 19:26:14.649900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.153 [2024-12-06 19:26:14.649912] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.153 [2024-12-06 19:26:14.662293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.153 [2024-12-06 19:26:14.662643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.153 [2024-12-06 19:26:14.662677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.153 [2024-12-06 19:26:14.662694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.153 [2024-12-06 19:26:14.662904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.153 [2024-12-06 19:26:14.663137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.153 [2024-12-06 19:26:14.663160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.153 [2024-12-06 19:26:14.663174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.153 [2024-12-06 19:26:14.663186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.153 [2024-12-06 19:26:14.675694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.153 [2024-12-06 19:26:14.676146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.153 [2024-12-06 19:26:14.676175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.153 [2024-12-06 19:26:14.676191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.153 [2024-12-06 19:26:14.676423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.153 [2024-12-06 19:26:14.676659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.153 [2024-12-06 19:26:14.676689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.153 [2024-12-06 19:26:14.676703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.153 [2024-12-06 19:26:14.676716] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.153 [2024-12-06 19:26:14.689151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.153 [2024-12-06 19:26:14.689544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.153 [2024-12-06 19:26:14.689570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.153 [2024-12-06 19:26:14.689586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.153 [2024-12-06 19:26:14.689839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.153 [2024-12-06 19:26:14.690073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.153 [2024-12-06 19:26:14.690092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.153 [2024-12-06 19:26:14.690104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.153 [2024-12-06 19:26:14.690116] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.153 [2024-12-06 19:26:14.702556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.153 [2024-12-06 19:26:14.702901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.153 [2024-12-06 19:26:14.702928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.154 [2024-12-06 19:26:14.702944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.154 [2024-12-06 19:26:14.703168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.154 [2024-12-06 19:26:14.703368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.154 [2024-12-06 19:26:14.703387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.154 [2024-12-06 19:26:14.703400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.154 [2024-12-06 19:26:14.703412] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.154 [2024-12-06 19:26:14.715839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.154 [2024-12-06 19:26:14.716224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.154 [2024-12-06 19:26:14.716288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.154 [2024-12-06 19:26:14.716303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.154 [2024-12-06 19:26:14.716519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.154 [2024-12-06 19:26:14.716772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.154 [2024-12-06 19:26:14.716793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.154 [2024-12-06 19:26:14.716806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.154 [2024-12-06 19:26:14.716818] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.729296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.729718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.729747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.729763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.730018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.730213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.730231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.730243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.730255] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.742352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.742721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.742748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.742763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.743000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.743195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.743213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.743225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.743237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.755576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.755926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.755961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.755992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.756204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.756398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.756416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.756428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.756439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.768680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.769047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.769073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.769088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.769311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.769520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.769538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.769550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.769562] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.781857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.782249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.782276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.782292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.782528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.782751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.782771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.782784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.782796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.795100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.795470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.795496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.795511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.795764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.795995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.796014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.796026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.796037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.808358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.808733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.808761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.808777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.809022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.809216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.809234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.809246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.809258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.821561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.821959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.822012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.822027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.822263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.822457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.822476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.822489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.822503] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.834871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.835227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.835254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.835270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.835488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.835727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.835751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.835765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.835777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.848126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.848495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.848521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.848537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.848803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.849018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.849036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.849048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.849061] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.861357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.861724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.861751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.861767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.862004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.862199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.862217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.862229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.411 [2024-12-06 19:26:14.862241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.411 [2024-12-06 19:26:14.874614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.411 [2024-12-06 19:26:14.874980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.411 [2024-12-06 19:26:14.875008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.411 [2024-12-06 19:26:14.875024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.411 [2024-12-06 19:26:14.875257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.411 [2024-12-06 19:26:14.875467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.411 [2024-12-06 19:26:14.875485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.411 [2024-12-06 19:26:14.875497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.412 [2024-12-06 19:26:14.875508] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.412 [2024-12-06 19:26:14.887858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.412 [2024-12-06 19:26:14.888243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.412 [2024-12-06 19:26:14.888270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.412 [2024-12-06 19:26:14.888285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.412 [2024-12-06 19:26:14.888501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.412 [2024-12-06 19:26:14.888739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.412 [2024-12-06 19:26:14.888760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.412 [2024-12-06 19:26:14.888772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.412 [2024-12-06 19:26:14.888784] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.412 [2024-12-06 19:26:14.901047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.412 [2024-12-06 19:26:14.901442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.412 [2024-12-06 19:26:14.901468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.412 [2024-12-06 19:26:14.901484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.412 [2024-12-06 19:26:14.901732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.412 [2024-12-06 19:26:14.901932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.412 [2024-12-06 19:26:14.901951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.412 [2024-12-06 19:26:14.901964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.412 [2024-12-06 19:26:14.901976] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.412 [2024-12-06 19:26:14.914333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.412 [2024-12-06 19:26:14.914675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.412 [2024-12-06 19:26:14.914703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.412 [2024-12-06 19:26:14.914718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.412 [2024-12-06 19:26:14.914943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.412 [2024-12-06 19:26:14.915159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.412 [2024-12-06 19:26:14.915178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.412 [2024-12-06 19:26:14.915191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.412 [2024-12-06 19:26:14.915218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.412 [2024-12-06 19:26:14.927601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.412 [2024-12-06 19:26:14.927992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.412 [2024-12-06 19:26:14.928025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.412 [2024-12-06 19:26:14.928042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.412 [2024-12-06 19:26:14.928274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.412 [2024-12-06 19:26:14.928512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.412 [2024-12-06 19:26:14.928531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.412 [2024-12-06 19:26:14.928543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.412 [2024-12-06 19:26:14.928571] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.412 [2024-12-06 19:26:14.940960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.412 [2024-12-06 19:26:14.941327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.412 [2024-12-06 19:26:14.941354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.412 [2024-12-06 19:26:14.941369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.412 [2024-12-06 19:26:14.941586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.412 [2024-12-06 19:26:14.941829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.412 [2024-12-06 19:26:14.941849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.412 [2024-12-06 19:26:14.941862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.412 [2024-12-06 19:26:14.941874] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.412 [2024-12-06 19:26:14.954269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.412 [2024-12-06 19:26:14.954593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.412 [2024-12-06 19:26:14.954618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.412 [2024-12-06 19:26:14.954633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.412 [2024-12-06 19:26:14.954901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.412 [2024-12-06 19:26:14.955146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.412 [2024-12-06 19:26:14.955165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.412 [2024-12-06 19:26:14.955177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.412 [2024-12-06 19:26:14.955188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.412 [2024-12-06 19:26:14.967577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.412 [2024-12-06 19:26:14.967982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.412 [2024-12-06 19:26:14.968025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.412 [2024-12-06 19:26:14.968041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.412 [2024-12-06 19:26:14.968283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.412 [2024-12-06 19:26:14.968476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.412 [2024-12-06 19:26:14.968494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.412 [2024-12-06 19:26:14.968507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.412 [2024-12-06 19:26:14.968518] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.412 [2024-12-06 19:26:14.980750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.412 [2024-12-06 19:26:14.981148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.412 [2024-12-06 19:26:14.981175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.412 [2024-12-06 19:26:14.981191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.412 [2024-12-06 19:26:14.981429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.412 [2024-12-06 19:26:14.981624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.412 [2024-12-06 19:26:14.981641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.412 [2024-12-06 19:26:14.981654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.412 [2024-12-06 19:26:14.981688] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.670 [2024-12-06 19:26:14.994189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:14.994557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:14.994585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:14.994600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:14.994834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:14.995067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:14.995085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:14.995097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:14.995108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.007547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.007968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.008011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.008026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.008250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.008467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.008490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.008504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.008516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.020838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.021225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.021252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.021267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.021503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.021726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.021746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.021758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.021770] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.033859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.034229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.034256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.034271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.034508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.034730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.034749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.034762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.034774] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.047029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.047395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.047423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.047438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.047685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.047906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.047926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.047938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.047965] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.060249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.060662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.060719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.060734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.060984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.061178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.061195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.061207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.061219] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.073348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.073693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.073721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.073736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.073955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.074165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.074183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.074195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.074206] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.086597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.087009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.087051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.087067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.087285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.087496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.087514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.087527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.087538] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.099739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.100180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.100212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.100228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.100462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.100700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.100721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.100735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.100748] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.112919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.113303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.113330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.113345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.113583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.113807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.113827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.113840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.113852] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 [2024-12-06 19:26:15.126310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.670 [2024-12-06 19:26:15.126686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.670 [2024-12-06 19:26:15.126714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.670 [2024-12-06 19:26:15.126730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.670 [2024-12-06 19:26:15.126962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.670 [2024-12-06 19:26:15.127180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.670 [2024-12-06 19:26:15.127200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.670 [2024-12-06 19:26:15.127212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.670 [2024-12-06 19:26:15.127223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1227681 Killed "${NVMF_APP[@]}" "$@" 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.670 [2024-12-06 19:26:15.139702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.670 [2024-12-06 19:26:15.140119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.670 [2024-12-06 19:26:15.140148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.670 [2024-12-06 19:26:15.140164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.670 [2024-12-06 19:26:15.140408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.670 [2024-12-06 19:26:15.140608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.670 [2024-12-06 19:26:15.140627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.670 [2024-12-06 19:26:15.140639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:04.670 [2024-12-06 19:26:15.140677] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1228687 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1228687 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1228687 ']' 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.670 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.670 [2024-12-06 19:26:15.153204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.670 [2024-12-06 19:26:15.153546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.670 [2024-12-06 19:26:15.153583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.670 [2024-12-06 19:26:15.153618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.670 [2024-12-06 19:26:15.153859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.670 [2024-12-06 19:26:15.154131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.671 [2024-12-06 19:26:15.154151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.671 [2024-12-06 19:26:15.154166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.671 [2024-12-06 19:26:15.154178] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.671 [2024-12-06 19:26:15.166734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.671 [2024-12-06 19:26:15.167140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.671 [2024-12-06 19:26:15.167168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.671 [2024-12-06 19:26:15.167190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.671 [2024-12-06 19:26:15.167435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.671 [2024-12-06 19:26:15.167635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.671 [2024-12-06 19:26:15.167678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.671 [2024-12-06 19:26:15.167693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.671 [2024-12-06 19:26:15.167707] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.671 [2024-12-06 19:26:15.180263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.671 [2024-12-06 19:26:15.180638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.671 [2024-12-06 19:26:15.180672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.671 [2024-12-06 19:26:15.180690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.671 [2024-12-06 19:26:15.180907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.671 [2024-12-06 19:26:15.181138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.671 [2024-12-06 19:26:15.181158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.671 [2024-12-06 19:26:15.181171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.671 [2024-12-06 19:26:15.181184] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.671 [2024-12-06 19:26:15.193782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.671 [2024-12-06 19:26:15.194181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.671 [2024-12-06 19:26:15.194209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.671 [2024-12-06 19:26:15.194224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.671 [2024-12-06 19:26:15.194469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.671 [2024-12-06 19:26:15.194701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.671 [2024-12-06 19:26:15.194724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.671 [2024-12-06 19:26:15.194738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.671 [2024-12-06 19:26:15.194752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.671 [2024-12-06 19:26:15.197219] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:28:04.671 [2024-12-06 19:26:15.197296] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.671 [2024-12-06 19:26:15.207199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.671 [2024-12-06 19:26:15.207542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.671 [2024-12-06 19:26:15.207570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.671 [2024-12-06 19:26:15.207590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.671 [2024-12-06 19:26:15.207858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.671 [2024-12-06 19:26:15.208078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.671 [2024-12-06 19:26:15.208098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.671 [2024-12-06 19:26:15.208117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.671 [2024-12-06 19:26:15.208131] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.671 [2024-12-06 19:26:15.220543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.671 [2024-12-06 19:26:15.220947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.671 [2024-12-06 19:26:15.220976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:04.671 [2024-12-06 19:26:15.220992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:04.671 [2024-12-06 19:26:15.221221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:04.671 [2024-12-06 19:26:15.221437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.671 [2024-12-06 19:26:15.221456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.671 [2024-12-06 19:26:15.221468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.671 [2024-12-06 19:26:15.221480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.671 [2024-12-06 19:26:15.233934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.671 [2024-12-06 19:26:15.234322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.671 [2024-12-06 19:26:15.234350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.671 [2024-12-06 19:26:15.234366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.671 [2024-12-06 19:26:15.234601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.671 [2024-12-06 19:26:15.234856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.671 [2024-12-06 19:26:15.234877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.671 [2024-12-06 19:26:15.234891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.671 [2024-12-06 19:26:15.234904] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.247708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.248086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.248114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.248130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.248361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.248598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.248618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.248631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.248658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.260964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.261351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.261378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.261394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.261617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.261861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.261881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.261895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.261907] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.271469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:04.929 [2024-12-06 19:26:15.274340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.274693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.274721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.274737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.274953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.275170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.275189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.275202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.275214] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.287730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.288244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.288278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.288297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.288533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.288782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.288803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.288829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.288845] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.301072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.301425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.301452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.301468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.301688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.301910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.301929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.301943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.301972] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.314296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.314684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.314712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.314729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.314946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.315179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.315198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.315210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.315223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.327599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.327966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.327995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.328011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.328229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.328449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.328470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.328485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.328514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.328656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:04.929 [2024-12-06 19:26:15.328711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:04.929 [2024-12-06 19:26:15.328725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:04.929 [2024-12-06 19:26:15.328736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:04.929 [2024-12-06 19:26:15.328746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:04.929 [2024-12-06 19:26:15.330067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:04.929 [2024-12-06 19:26:15.330130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:04.929 [2024-12-06 19:26:15.330134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:04.929 [2024-12-06 19:26:15.341145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.341681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.341720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.341740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.341965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.342203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.342224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.342241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.342258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.354790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.355341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.355379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.355400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.355641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.355891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.355914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.355931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.355947] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.368390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.368897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.368935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.368955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.369197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.369426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.369447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.369464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.369480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.381994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.382465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.382500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.382520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.382755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.382996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.383018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.383034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.383050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.395599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.396158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.396196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.396216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.396457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.396704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.396727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.396745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.396760] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.409292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.409822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.409859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.409881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.410123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.410343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.410363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.410395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.929 [2024-12-06 19:26:15.410413] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.929 [2024-12-06 19:26:15.422812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.929 [2024-12-06 19:26:15.423192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.929 [2024-12-06 19:26:15.423222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.929 [2024-12-06 19:26:15.423239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.929 [2024-12-06 19:26:15.423457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.929 [2024-12-06 19:26:15.423715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.929 [2024-12-06 19:26:15.423738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.929 [2024-12-06 19:26:15.423752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.930 [2024-12-06 19:26:15.423766] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.930 [2024-12-06 19:26:15.436519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.930 [2024-12-06 19:26:15.436909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.930 [2024-12-06 19:26:15.436938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.930 [2024-12-06 19:26:15.436954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.930 [2024-12-06 19:26:15.437171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.930 [2024-12-06 19:26:15.437391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.930 [2024-12-06 19:26:15.437412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.930 [2024-12-06 19:26:15.437427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.930 [2024-12-06 19:26:15.437440] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:28:04.930 [2024-12-06 19:26:15.450228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:04.930 [2024-12-06 19:26:15.450572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.930 [2024-12-06 19:26:15.450601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.930 [2024-12-06 19:26:15.450617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:04.930 [2024-12-06 19:26:15.450842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.930 [2024-12-06 19:26:15.451080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.930 [2024-12-06 19:26:15.451101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.930 [2024-12-06 19:26:15.451120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:04.930 [2024-12-06 19:26:15.451133] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.930 [2024-12-06 19:26:15.463961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.930 [2024-12-06 19:26:15.464316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.930 [2024-12-06 19:26:15.464344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.930 [2024-12-06 19:26:15.464360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.930 [2024-12-06 19:26:15.464577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.930 [2024-12-06 19:26:15.464840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.930 [2024-12-06 19:26:15.464862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.930 [2024-12-06 19:26:15.464876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.930 [2024-12-06 19:26:15.464890] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:04.930 [2024-12-06 19:26:15.477404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.930 [2024-12-06 19:26:15.477794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.930 [2024-12-06 19:26:15.477822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.930 [2024-12-06 19:26:15.477838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.930 [2024-12-06 19:26:15.478055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.930 [2024-12-06 19:26:15.478284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.930 [2024-12-06 19:26:15.478304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.930 [2024-12-06 19:26:15.478318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.930 [2024-12-06 19:26:15.478331] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.930 [2024-12-06 19:26:15.478955] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:04.930 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:04.930 [2024-12-06 19:26:15.491152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.930 [2024-12-06 19:26:15.491504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.930 [2024-12-06 19:26:15.491538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:04.930 [2024-12-06 19:26:15.491555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:04.930 [2024-12-06 19:26:15.491784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:04.930 [2024-12-06 19:26:15.492028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.930 [2024-12-06 19:26:15.492049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.930 [2024-12-06 19:26:15.492063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.930 [2024-12-06 19:26:15.492079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.187 [2024-12-06 19:26:15.505159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.187 [2024-12-06 19:26:15.505501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.187 [2024-12-06 19:26:15.505529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:05.187 [2024-12-06 19:26:15.505546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:05.187 [2024-12-06 19:26:15.505773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:05.187 [2024-12-06 19:26:15.506023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.187 [2024-12-06 19:26:15.506042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.187 [2024-12-06 19:26:15.506055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.187 [2024-12-06 19:26:15.506068] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.187 [2024-12-06 19:26:15.518872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.187 [2024-12-06 19:26:15.519255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.187 [2024-12-06 19:26:15.519285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420
00:28:05.187 [2024-12-06 19:26:15.519302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set
00:28:05.187 [2024-12-06 19:26:15.519523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor
00:28:05.187 [2024-12-06 19:26:15.519757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.187 [2024-12-06 19:26:15.519779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.187 [2024-12-06 19:26:15.519795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.187 [2024-12-06 19:26:15.519818] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.187 Malloc0 00:28:05.187 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.187 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.187 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.187 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.187 [2024-12-06 19:26:15.532559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.187 [2024-12-06 19:26:15.533001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-12-06 19:26:15.533032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:05.187 [2024-12-06 19:26:15.533050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:05.187 [2024-12-06 19:26:15.533285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:05.188 [2024-12-06 19:26:15.533501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.188 [2024-12-06 19:26:15.533521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.188 [2024-12-06 19:26:15.533536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.188 [2024-12-06 19:26:15.533550] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.188 [2024-12-06 19:26:15.546385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.188 [2024-12-06 19:26:15.546749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-12-06 19:26:15.546777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa8660 with addr=10.0.0.2, port=4420 00:28:05.188 [2024-12-06 19:26:15.546793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa8660 is same with the state(6) to be set 00:28:05.188 [2024-12-06 19:26:15.547010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa8660 (9): Bad file descriptor 00:28:05.188 [2024-12-06 19:26:15.547238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.188 [2024-12-06 19:26:15.547258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:28:05.188 [2024-12-06 19:26:15.547271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.188 [2024-12-06 19:26:15.547284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.188 [2024-12-06 19:26:15.547493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.188 19:26:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1227910 00:28:05.188 [2024-12-06 19:26:15.560063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.188 [2024-12-06 19:26:15.628646] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:28:06.117 3479.17 IOPS, 13.59 MiB/s [2024-12-06T18:26:18.061Z] 4235.71 IOPS, 16.55 MiB/s [2024-12-06T18:26:18.989Z] 4788.62 IOPS, 18.71 MiB/s [2024-12-06T18:26:19.918Z] 5210.00 IOPS, 20.35 MiB/s [2024-12-06T18:26:20.847Z] 5543.80 IOPS, 21.66 MiB/s [2024-12-06T18:26:21.780Z] 5835.64 IOPS, 22.80 MiB/s [2024-12-06T18:26:22.714Z] 6080.00 IOPS, 23.75 MiB/s [2024-12-06T18:26:23.749Z] 6277.62 IOPS, 24.52 MiB/s [2024-12-06T18:26:24.684Z] 6447.93 IOPS, 25.19 MiB/s 00:28:14.107 Latency(us) 00:28:14.107 [2024-12-06T18:26:24.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.107 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:14.107 Verification LBA range: start 0x0 length 0x4000 00:28:14.107 Nvme1n1 : 15.01 6597.22 25.77 10123.08 0.00 7632.65 825.27 17961.72 00:28:14.107 [2024-12-06T18:26:24.684Z] =================================================================================================================== 00:28:14.107 
[2024-12-06T18:26:24.684Z] Total : 6597.22 25.77 10123.08 0.00 7632.65 825.27 17961.72 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:14.365 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:14.365 rmmod nvme_tcp 00:28:14.365 rmmod nvme_fabrics 00:28:14.623 rmmod nvme_keyring 00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1228687 ']' 00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1228687 
00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1228687 ']' 00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1228687 00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.623 19:26:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1228687 00:28:14.623 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:14.623 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:14.623 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1228687' 00:28:14.623 killing process with pid 1228687 00:28:14.623 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1228687 00:28:14.623 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1228687 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.880 19:26:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.783 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.783 00:28:16.783 real 0m22.706s 00:28:16.783 user 1m0.804s 00:28:16.783 sys 0m4.337s 00:28:16.783 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.783 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:16.783 ************************************ 00:28:16.783 END TEST nvmf_bdevperf 00:28:16.783 ************************************ 00:28:16.783 19:26:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:16.783 19:26:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:16.783 19:26:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.783 19:26:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.042 ************************************ 00:28:17.042 START TEST nvmf_target_disconnect 00:28:17.042 ************************************ 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:17.042 * Looking for test storage... 
00:28:17.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:17.042 19:26:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:17.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.042 
--rc genhtml_branch_coverage=1 00:28:17.042 --rc genhtml_function_coverage=1 00:28:17.042 --rc genhtml_legend=1 00:28:17.042 --rc geninfo_all_blocks=1 00:28:17.042 --rc geninfo_unexecuted_blocks=1 00:28:17.042 00:28:17.042 ' 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:17.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.042 --rc genhtml_branch_coverage=1 00:28:17.042 --rc genhtml_function_coverage=1 00:28:17.042 --rc genhtml_legend=1 00:28:17.042 --rc geninfo_all_blocks=1 00:28:17.042 --rc geninfo_unexecuted_blocks=1 00:28:17.042 00:28:17.042 ' 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:17.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.042 --rc genhtml_branch_coverage=1 00:28:17.042 --rc genhtml_function_coverage=1 00:28:17.042 --rc genhtml_legend=1 00:28:17.042 --rc geninfo_all_blocks=1 00:28:17.042 --rc geninfo_unexecuted_blocks=1 00:28:17.042 00:28:17.042 ' 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:17.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.042 --rc genhtml_branch_coverage=1 00:28:17.042 --rc genhtml_function_coverage=1 00:28:17.042 --rc genhtml_legend=1 00:28:17.042 --rc geninfo_all_blocks=1 00:28:17.042 --rc geninfo_unexecuted_blocks=1 00:28:17.042 00:28:17.042 ' 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.042 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.043 19:26:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:17.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.043 19:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.579 
19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:19.579 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:19.579 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.579 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:19.580 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:19.580 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.580 19:26:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:28:19.580 00:28:19.580 --- 10.0.0.2 ping statistics --- 00:28:19.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.580 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:28:19.580 00:28:19.580 --- 10.0.0.1 ping statistics --- 00:28:19.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.580 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:19.580 19:26:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:19.580 ************************************ 00:28:19.580 START TEST nvmf_target_disconnect_tc1 00:28:19.580 ************************************ 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:19.580 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.581 [2024-12-06 19:26:29.896710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-12-06 19:26:29.896785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad7f40 with 
addr=10.0.0.2, port=4420 00:28:19.581 [2024-12-06 19:26:29.896822] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:19.581 [2024-12-06 19:26:29.896847] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:19.581 [2024-12-06 19:26:29.896862] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:19.581 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:19.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:19.581 Initializing NVMe Controllers 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:19.581 00:28:19.581 real 0m0.096s 00:28:19.581 user 0m0.049s 00:28:19.581 sys 0m0.046s 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.581 ************************************ 00:28:19.581 END TEST nvmf_target_disconnect_tc1 00:28:19.581 ************************************ 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:19.581 19:26:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:19.581 ************************************ 00:28:19.581 START TEST nvmf_target_disconnect_tc2 00:28:19.581 ************************************ 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1231865 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1231865 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1231865 ']' 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.581 19:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.581 [2024-12-06 19:26:30.016588] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:28:19.581 [2024-12-06 19:26:30.016710] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.581 [2024-12-06 19:26:30.094843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.840 [2024-12-06 19:26:30.159165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.840 [2024-12-06 19:26:30.159242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.840 [2024-12-06 19:26:30.159256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.840 [2024-12-06 19:26:30.159267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.840 [2024-12-06 19:26:30.159276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:19.840 [2024-12-06 19:26:30.161116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:19.840 [2024-12-06 19:26:30.161178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:19.840 [2024-12-06 19:26:30.161201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:19.840 [2024-12-06 19:26:30.161207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.840 Malloc0 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.840 19:26:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.840 [2024-12-06 19:26:30.352142] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.840 19:26:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.840 [2024-12-06 19:26:30.380395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1231899 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:19.840 19:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:22.411 19:26:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1231865 00:28:22.411 19:26:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Write completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Write completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Write completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Write completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Write completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 
Write completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Write completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Read completed with error (sct=0, sc=8) 00:28:22.411 starting I/O failed 00:28:22.411 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 [2024-12-06 19:26:32.407181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O 
failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 
00:28:22.412 [2024-12-06 19:26:32.407498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting 
I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Write completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.412 Read completed with error (sct=0, sc=8) 00:28:22.412 starting I/O failed 00:28:22.413 [2024-12-06 19:26:32.407805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 
00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Write completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 Read completed with error (sct=0, sc=8) 00:28:22.413 starting I/O failed 00:28:22.413 [2024-12-06 19:26:32.408109] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.413 [2024-12-06 19:26:32.408288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.408331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.408463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.408489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.408608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.408633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.408743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.408768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.408865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.408891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 
00:28:22.413 [2024-12-06 19:26:32.408986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.409013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.409096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.409121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.409244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.409269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.409359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.409385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.409501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.409526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 
00:28:22.413 [2024-12-06 19:26:32.409601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.409627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.409752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.409776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.409872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.413 [2024-12-06 19:26:32.409897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.413 qpair failed and we were unable to recover it. 00:28:22.413 [2024-12-06 19:26:32.410033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.410060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.410150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.410175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 
00:28:22.414 [2024-12-06 19:26:32.410295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.410321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.410411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.410436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.410567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.410609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.410737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.410787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.410894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.410935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 
00:28:22.414 [2024-12-06 19:26:32.411064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.411090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.411232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.411260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.411351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.411377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.411493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.411519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.411639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.411673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 
00:28:22.414 [2024-12-06 19:26:32.411785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.411814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.411902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.411929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.412072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.412097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.412189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.412214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.412325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.412352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 
00:28:22.414 [2024-12-06 19:26:32.412479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.412518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.412607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.412634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.412729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.412755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.412853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.412878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.412987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.413022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 
00:28:22.414 [2024-12-06 19:26:32.413103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.413129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.413323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.413383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.413474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.413500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.414 [2024-12-06 19:26:32.413575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.414 [2024-12-06 19:26:32.413609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.414 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.413716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.413753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 
00:28:22.415 [2024-12-06 19:26:32.413834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.413859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.413946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.413983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.414135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.414163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.414281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.414307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.414392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.414418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 
00:28:22.415 [2024-12-06 19:26:32.414536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.414561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.414657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.414703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.414840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.414868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.415003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.415031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.415125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.415152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 
00:28:22.415 [2024-12-06 19:26:32.415346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.415374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.415485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.415511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.415687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.415736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.415832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.415860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.415965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.416006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 
00:28:22.415 [2024-12-06 19:26:32.416132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.416158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.416248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.416277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.416370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.416397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.416513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.416538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.416627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.416653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 
00:28:22.415 [2024-12-06 19:26:32.416792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.416818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.416921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.416974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.417081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.417108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.417227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.417253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 00:28:22.415 [2024-12-06 19:26:32.417340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.415 [2024-12-06 19:26:32.417365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.415 qpair failed and we were unable to recover it. 
00:28:22.415 [2024-12-06 19:26:32.417459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.417490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.417575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.417602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.417751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.417777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.417860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.417886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.417975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.418012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 
00:28:22.416 [2024-12-06 19:26:32.418126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.418151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.418228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.418254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.418364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.418390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.418484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.418511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.418586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.418612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 
00:28:22.416 [2024-12-06 19:26:32.418732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.418758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.418843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.418868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.418960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.418987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.419169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.419197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.419319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.419345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 
00:28:22.416 [2024-12-06 19:26:32.419433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.419462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.419583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.419610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.419731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.419755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.419875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.419902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 00:28:22.416 [2024-12-06 19:26:32.420017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.416 [2024-12-06 19:26:32.420082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.416 qpair failed and we were unable to recover it. 
00:28:22.416 [2024-12-06 19:26:32.420259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.416 [2024-12-06 19:26:32.420286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.416 qpair failed and we were unable to recover it.
00:28:22.416 [2024-12-06 19:26:32.420402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.416 [2024-12-06 19:26:32.420427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.416 qpair failed and we were unable to recover it.
00:28:22.416 [2024-12-06 19:26:32.420537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.416 [2024-12-06 19:26:32.420562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.416 qpair failed and we were unable to recover it.
00:28:22.416 [2024-12-06 19:26:32.420646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.416 [2024-12-06 19:26:32.420678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.416 qpair failed and we were unable to recover it.
00:28:22.416 [2024-12-06 19:26:32.420765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.416 [2024-12-06 19:26:32.420789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.416 qpair failed and we were unable to recover it.
00:28:22.416 [2024-12-06 19:26:32.420877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.416 [2024-12-06 19:26:32.420903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.416 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.420984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.421009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.421122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.421153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.421246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.421272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.421373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.421410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.421505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.421532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.421615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.421640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.421762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.421788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.421870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.421897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.421978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.422005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.422085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.422112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.422252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.422278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.422360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.422386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.422494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.422520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.422637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.422671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.422754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.422781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.422878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.422905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.423052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.423078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.423168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.423195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.423311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.423337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.423417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.423443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.423526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.423553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.423653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.423698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.423784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.423811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.423893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.423929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.424023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.424051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.417 [2024-12-06 19:26:32.424142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.417 [2024-12-06 19:26:32.424169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.417 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.424293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.424319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.424428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.424454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.424550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.424588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.424695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.424740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.424852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.424892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.425034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.425063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.425183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.425209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.425326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.425352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.425431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.425457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.425596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.425621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.425740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.425766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.425849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.425875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.425999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.426029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.426143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.426169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.426276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.426302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.426413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.426445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.426547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.426586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.426708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.426745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.426834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.426865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.426991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.418 [2024-12-06 19:26:32.427019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.418 qpair failed and we were unable to recover it.
00:28:22.418 [2024-12-06 19:26:32.427101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.427126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.427206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.427231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.427414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.427468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.427574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.427599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.427722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.427748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.427832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.427861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.427966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.427998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.428109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.428134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.428216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.428241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.428360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.428385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.428492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.428518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.428634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.428661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.428797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.428824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.428911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.428937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.429045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.429070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.429160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.429187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.429304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.429331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.429412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.429438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.429552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.429577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.429660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.429692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.429773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.429798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.429873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.429897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.429977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.430031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.430109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.430134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.430216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.430242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.430386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.419 [2024-12-06 19:26:32.430414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.419 qpair failed and we were unable to recover it.
00:28:22.419 [2024-12-06 19:26:32.430528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.430555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.430645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.430677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.430765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.430790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.430885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.430911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.431016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.431041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.431183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.431210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.431331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.431389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.431502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.431528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.431620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.431646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.431785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.431812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.431935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.431962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.432048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.432074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.432153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.432178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.432335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.432373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.432494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.432522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.432673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.432701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.432797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.432823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.432906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.432932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.433029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.433055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.433161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.433188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.433325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.433350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.433440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.433465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.433608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.433633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.433782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.433821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.433942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.433974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.434119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.420 [2024-12-06 19:26:32.434145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.420 qpair failed and we were unable to recover it.
00:28:22.420 [2024-12-06 19:26:32.434233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.434258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.434345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.434370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.434459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.434496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.434612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.434639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.434773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.434813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.434937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.434971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.435085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.435113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.435229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.435255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.435374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.435401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.435496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.435534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.435669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.421 [2024-12-06 19:26:32.435696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.421 qpair failed and we were unable to recover it.
00:28:22.421 [2024-12-06 19:26:32.435829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.435857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.435953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.435979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.436192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.436247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.436460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.436489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.436600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.436626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 
00:28:22.421 [2024-12-06 19:26:32.436729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.436757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.436903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.436942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.437035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.437061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.437238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.437265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.437453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.437511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 
00:28:22.421 [2024-12-06 19:26:32.437638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.437676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.437764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.437791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.437871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.437899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.438004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.438044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 00:28:22.421 [2024-12-06 19:26:32.438141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.438168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.421 qpair failed and we were unable to recover it. 
00:28:22.421 [2024-12-06 19:26:32.438250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.421 [2024-12-06 19:26:32.438275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.438383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.438409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.438542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.438581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.438735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.438764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.438879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.438907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 
00:28:22.422 [2024-12-06 19:26:32.439096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.439150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.439378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.439431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.439522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.439548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.439690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.439727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.439813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.439840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 
00:28:22.422 [2024-12-06 19:26:32.439925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.439961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.440111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.440142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.440261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.440287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.440395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.440421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.440555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.440594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 
00:28:22.422 [2024-12-06 19:26:32.440722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.440764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.440911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.440950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.441158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.441211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.441389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.441416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.441534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.441559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 
00:28:22.422 [2024-12-06 19:26:32.441682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.441721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.441836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.441863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.441939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.441967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.442152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.442211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.442377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.442427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 
00:28:22.422 [2024-12-06 19:26:32.442546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.442572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.422 [2024-12-06 19:26:32.442689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.422 [2024-12-06 19:26:32.442726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.422 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.442836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.442863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.443001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.443029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.443201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.443259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 
00:28:22.423 [2024-12-06 19:26:32.443402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.443429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.443523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.443549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.443671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.443697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.443831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.443858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.443976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.444004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 
00:28:22.423 [2024-12-06 19:26:32.444116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.444142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.444288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.444315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.444424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.444449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.444570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.444598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.444694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.444729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 
00:28:22.423 [2024-12-06 19:26:32.444814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.444840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.444956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.444982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.445098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.445124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.445210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.445235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.445320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.445348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 
00:28:22.423 [2024-12-06 19:26:32.445466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.445492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.445626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.445672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.445793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.445824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.445983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.446012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.446110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.446137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 
00:28:22.423 [2024-12-06 19:26:32.446285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.446314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.446524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.446596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.423 [2024-12-06 19:26:32.446752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.423 [2024-12-06 19:26:32.446779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.423 qpair failed and we were unable to recover it. 00:28:22.424 [2024-12-06 19:26:32.446889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.424 [2024-12-06 19:26:32.446915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.424 qpair failed and we were unable to recover it. 00:28:22.424 [2024-12-06 19:26:32.447062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.424 [2024-12-06 19:26:32.447087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.424 qpair failed and we were unable to recover it. 
00:28:22.424 [2024-12-06 19:26:32.447174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.424 [2024-12-06 19:26:32.447199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.424 qpair failed and we were unable to recover it. 00:28:22.424 [2024-12-06 19:26:32.447315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.424 [2024-12-06 19:26:32.447377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.424 qpair failed and we were unable to recover it. 00:28:22.424 [2024-12-06 19:26:32.447484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.424 [2024-12-06 19:26:32.447510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.424 qpair failed and we were unable to recover it. 00:28:22.424 [2024-12-06 19:26:32.447626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.424 [2024-12-06 19:26:32.447652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.424 qpair failed and we were unable to recover it. 00:28:22.424 [2024-12-06 19:26:32.447771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.424 [2024-12-06 19:26:32.447799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.424 qpair failed and we were unable to recover it. 
00:28:22.424 [2024-12-06 19:26:32.447913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.447939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.448034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.448061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.448185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.448211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.448353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.448379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.448509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.448550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.448686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.448713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.448851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.448891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.448983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.449009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.449105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.449131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.449346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.449399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.449512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.449537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.449651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.449685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.449776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.449801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.449920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.449948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.450079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.450105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.450189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.450215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.450328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.450354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.450496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.450521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.424 [2024-12-06 19:26:32.450653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.424 [2024-12-06 19:26:32.450694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.424 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.450784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.450811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.450927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.450953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.451061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.451087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.451210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.451238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.451355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.451380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.451529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.451570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.451673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.451700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.451855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.451893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.451982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.452009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.452123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.452149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.452256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.452281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.452370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.452396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.452523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.452550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.452644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.452678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.452764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.452790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.452886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.452913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.453029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.453055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.453170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.453195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.453284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.453311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.453402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.453427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.425 [2024-12-06 19:26:32.453545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.425 [2024-12-06 19:26:32.453571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.425 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.453691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.453717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.453825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.453851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.453937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.453963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.454043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.454069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.454218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.454256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.454374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.454406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.454518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.454545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.454646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.454679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.454793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.454820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.454910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.454936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.455079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.455105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.455244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.455270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.455387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.455413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.455500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.455526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.455633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.455659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.455751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.455777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.455888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.455914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.456054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.456079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.456199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.456224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.456368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.456395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.456493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.456530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.456654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.456689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.456795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.456820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.456903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.456928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.457035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.457060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.426 qpair failed and we were unable to recover it.
00:28:22.426 [2024-12-06 19:26:32.457152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.426 [2024-12-06 19:26:32.457177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.457298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.457323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.457440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.457466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.457582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.457609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.457699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.457733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.457846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.457872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.457997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.458022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.458170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.458200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.458319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.458345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.458461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.458489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.458573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.458599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.458740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.458768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.458883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.458909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.459024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.459051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.459158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.459184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.459326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.459354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.459466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.459494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.459586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.459612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.459723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.459750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.459890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.459918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.460043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.460075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.460191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.460219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.460341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.460369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.460529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.460569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.460676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.460714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.460846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.460887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.460991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.461017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.427 qpair failed and we were unable to recover it.
00:28:22.427 [2024-12-06 19:26:32.461140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.427 [2024-12-06 19:26:32.461168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.461277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.461304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.461414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.461441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.461531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.461557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.461650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.461688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.461770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.461795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.461914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.461940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.462058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.462085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.462228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.462255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.462367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.462395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.462507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.462533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.462647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.462681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.462767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.462794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.462906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.462931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.463008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.463034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.463178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.463205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.463313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.463339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.463426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.463454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.463565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.463591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.463682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.463708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.463830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.463860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.463987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.464026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.464178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.428 [2024-12-06 19:26:32.464207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.428 qpair failed and we were unable to recover it.
00:28:22.428 [2024-12-06 19:26:32.464323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.428 [2024-12-06 19:26:32.464350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.428 qpair failed and we were unable to recover it. 00:28:22.428 [2024-12-06 19:26:32.464460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.428 [2024-12-06 19:26:32.464485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.428 qpair failed and we were unable to recover it. 00:28:22.428 [2024-12-06 19:26:32.464605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.428 [2024-12-06 19:26:32.464632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.428 qpair failed and we were unable to recover it. 00:28:22.428 [2024-12-06 19:26:32.464727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.428 [2024-12-06 19:26:32.464753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.428 qpair failed and we were unable to recover it. 00:28:22.428 [2024-12-06 19:26:32.464870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.428 [2024-12-06 19:26:32.464895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.428 qpair failed and we were unable to recover it. 
00:28:22.428 [2024-12-06 19:26:32.465034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.428 [2024-12-06 19:26:32.465061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.428 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.465228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.465291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.465377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.465403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.465515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.465540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.465622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.465647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 
00:28:22.429 [2024-12-06 19:26:32.465775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.465807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.465922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.465949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.466060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.466086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.466202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.466230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.466369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.466396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 
00:28:22.429 [2024-12-06 19:26:32.466484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.466511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.466652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.466685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.466768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.466794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.466874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.466899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.467009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.467035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 
00:28:22.429 [2024-12-06 19:26:32.467141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.467165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.467285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.467312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.467428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.467455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.467592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.467619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.467759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.467787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 
00:28:22.429 [2024-12-06 19:26:32.467904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.467933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.468052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.468079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.468191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.468216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.468322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.468347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.468463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.468490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 
00:28:22.429 [2024-12-06 19:26:32.468598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.468623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.468724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.468751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.468840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.468865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.429 [2024-12-06 19:26:32.468981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.429 [2024-12-06 19:26:32.469009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.429 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.469123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.469147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 
00:28:22.430 [2024-12-06 19:26:32.469265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.469293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.469396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.469434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.469562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.469590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.469699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.469725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.469871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.469898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 
00:28:22.430 [2024-12-06 19:26:32.470067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.470121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.470358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.470410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.470543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.470571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.470661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.470697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.470787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.470815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 
00:28:22.430 [2024-12-06 19:26:32.470963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.470992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.471105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.471132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.471210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.471236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.471348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.471374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.471501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.471528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 
00:28:22.430 [2024-12-06 19:26:32.471602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.471633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.471729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.471756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.471891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.471916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.472006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.472031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.472150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.472177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 
00:28:22.430 [2024-12-06 19:26:32.472295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.472329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.472443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.472470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.472613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.472639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.472760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.472786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.472878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.472903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 
00:28:22.430 [2024-12-06 19:26:32.472991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.473017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.430 [2024-12-06 19:26:32.473128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.430 [2024-12-06 19:26:32.473153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.430 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.473263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.473298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.473374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.473399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.473516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.473542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 
00:28:22.431 [2024-12-06 19:26:32.473674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.473714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.473874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.473915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.474035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.474065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.474157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.474185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.474323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.474352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 
00:28:22.431 [2024-12-06 19:26:32.474484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.474524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.474644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.474678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.474816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.474843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.474927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.474953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 00:28:22.431 [2024-12-06 19:26:32.475040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.475064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it. 
00:28:22.431 [2024-12-06 19:26:32.475252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.431 [2024-12-06 19:26:32.475279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.431 qpair failed and we were unable to recover it.
[log trimmed: the same pair of errors — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error — repeats continuously from 19:26:32.475422 through 19:26:32.491814 for tqpair values 0x7f82c8000b90, 0x7f82cc000b90, 0x7f82d4000b90, and 0x6bcfa0, all targeting addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it."]
00:28:22.435 [2024-12-06 19:26:32.491943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.491971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.492090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.492116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.492305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.492331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.492445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.492471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.492588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.492615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 
00:28:22.435 [2024-12-06 19:26:32.492710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.492736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.492818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.492843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.492954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.492979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.493103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.493130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.493309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.493367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 
00:28:22.435 [2024-12-06 19:26:32.493477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.493504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.493583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.493608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.493749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.493776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.493884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.493912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.494023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.494050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 
00:28:22.435 [2024-12-06 19:26:32.494130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.435 [2024-12-06 19:26:32.494155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.435 qpair failed and we were unable to recover it. 00:28:22.435 [2024-12-06 19:26:32.494260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.494285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.494394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.494421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.494559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.494585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.494702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.494728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 
00:28:22.436 [2024-12-06 19:26:32.494847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.494874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.494974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.495015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.495140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.495169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.495308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.495336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.495475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.495502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 
00:28:22.436 [2024-12-06 19:26:32.495621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.495648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.495806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.495846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.495939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.495965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.496106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.496133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.496272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.496299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 
00:28:22.436 [2024-12-06 19:26:32.496386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.496411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.496526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.496552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.496630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.496654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.496739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.496763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.496875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.496906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 
00:28:22.436 [2024-12-06 19:26:32.497028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.497055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.497167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.497194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.497331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.497358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.497470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.497497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.497585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.497610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 
00:28:22.436 [2024-12-06 19:26:32.497726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.497766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.497861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.497890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.498031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.498059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.498202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.498229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.498365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.498392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 
00:28:22.436 [2024-12-06 19:26:32.498500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.498527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.498675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.498702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.498797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.498822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.436 [2024-12-06 19:26:32.498912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.436 [2024-12-06 19:26:32.498939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.436 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.499029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.499054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 
00:28:22.437 [2024-12-06 19:26:32.499225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.499275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.499357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.499383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.499489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.499516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.499632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.499659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.499779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.499806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 
00:28:22.437 [2024-12-06 19:26:32.499894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.499919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.499999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.500024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.500146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.500173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.500258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.500284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.500404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.500430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 
00:28:22.437 [2024-12-06 19:26:32.500543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.500570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.500700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.500740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.500868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.500895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.501006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.501033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.501260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.501322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 
00:28:22.437 [2024-12-06 19:26:32.501475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.501527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.501657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.501713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.501839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.501868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.501988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.502016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.502157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.502184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 
00:28:22.437 [2024-12-06 19:26:32.502329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.502402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.502519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.502546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.502683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.502710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.502849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.502876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.503014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.503040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 
00:28:22.437 [2024-12-06 19:26:32.503232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.437 [2024-12-06 19:26:32.503283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.437 qpair failed and we were unable to recover it. 00:28:22.437 [2024-12-06 19:26:32.503400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.503430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.503521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.503547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.503668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.503697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.503784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.503811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 
00:28:22.438 [2024-12-06 19:26:32.503930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.503957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.504072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.504099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.504244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.504271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.504387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.504414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.504554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.504581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 
00:28:22.438 [2024-12-06 19:26:32.504701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.504730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.504844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.504871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.504981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.505007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.505134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.505163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.505251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.505276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 
00:28:22.438 [2024-12-06 19:26:32.505391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.505417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.505503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.505529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.505646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.505680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.505766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.505791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.505902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.505930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 
00:28:22.438 [2024-12-06 19:26:32.506019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.506045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.506153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.506179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.506317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.506343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.506428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.506453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.506560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.506600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 
00:28:22.438 [2024-12-06 19:26:32.506684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.506711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.506801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.506832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.506944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.506971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.507160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.507186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.507376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.507434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 
00:28:22.438 [2024-12-06 19:26:32.507551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.507577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.507691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.507718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.507801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.507826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.438 [2024-12-06 19:26:32.507930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-12-06 19:26:32.507957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.438 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.508065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.508091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 
00:28:22.439 [2024-12-06 19:26:32.508207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.508234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.508341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.508367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.508495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.508535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.508632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.508661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.508792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.508821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 
00:28:22.439 [2024-12-06 19:26:32.508941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.508968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.509055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.509080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.509191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.509217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.509308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.509334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.509454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.509481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 
00:28:22.439 [2024-12-06 19:26:32.509562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.509587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.509712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.509739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.509849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.509875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.509956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.509981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.510061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.510087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 
00:28:22.439 [2024-12-06 19:26:32.510200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.510227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.510335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.510362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.510466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.510493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.510605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.510638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.510762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.510790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 
00:28:22.439 [2024-12-06 19:26:32.510942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.510968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.511063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.511087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.511173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.511199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.511315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.511342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.511451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.511477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 
00:28:22.439 [2024-12-06 19:26:32.511586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.511612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.511729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.511755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.511895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.511922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.512004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.512029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.439 [2024-12-06 19:26:32.512146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.512173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 
00:28:22.439 [2024-12-06 19:26:32.512309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.439 [2024-12-06 19:26:32.512336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.439 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.512421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.512445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.512542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.512569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.512684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.512711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.512798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.512822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 
00:28:22.440 [2024-12-06 19:26:32.512905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.512930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.513065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.513092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.513201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.513228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.513340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.513370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.513485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.513512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 
00:28:22.440 [2024-12-06 19:26:32.513622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.513649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.513768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.513796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.513886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.513911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.514048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.514076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.514213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.514239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 
00:28:22.440 [2024-12-06 19:26:32.514355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.514386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.514535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.514562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.514678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.514705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.514822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.514848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.514961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.514987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 
00:28:22.440 [2024-12-06 19:26:32.515100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.515127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.515208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.515233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.515324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.515351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.515443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.515469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.515612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.515639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 
00:28:22.440 [2024-12-06 19:26:32.515773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.515814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.515967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.515995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.516107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.516134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.516217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.516243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.516366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.516393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 
00:28:22.440 [2024-12-06 19:26:32.516483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.516509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.516645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.516678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.516784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.516811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.516898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.516922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 00:28:22.440 [2024-12-06 19:26:32.517004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.440 [2024-12-06 19:26:32.517031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.440 qpair failed and we were unable to recover it. 
00:28:22.441 [2024-12-06 19:26:32.517174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.517203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.517285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.517311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.517454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.517480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.517585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.517612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.517695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.517721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 
00:28:22.441 [2024-12-06 19:26:32.517807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.517833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.517924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.517951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.518110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.518150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.518243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.518271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.518415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.518442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 
00:28:22.441 [2024-12-06 19:26:32.518559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.518586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.518706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.518735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.518883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.518912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.519061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.519123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.519280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.519332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 
00:28:22.441 [2024-12-06 19:26:32.519445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.519472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.519563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.519589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.519753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.519794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.519915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.519943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.520053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.520080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 
00:28:22.441 [2024-12-06 19:26:32.520162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.520193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.520304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.520330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.520444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.520470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.520579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.520605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.520688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.520714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 
00:28:22.441 [2024-12-06 19:26:32.520822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.520848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.520965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.520991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.521215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.521271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.521378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.521404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.521527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.521553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 
00:28:22.441 [2024-12-06 19:26:32.521661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.521706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.521821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.521848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.521990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.522016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.522129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.522155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.441 qpair failed and we were unable to recover it. 00:28:22.441 [2024-12-06 19:26:32.522263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.441 [2024-12-06 19:26:32.522290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 
00:28:22.442 [2024-12-06 19:26:32.522407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.522434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.522530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.522571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.522662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.522699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.522816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.522843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.522936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.522964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 
00:28:22.442 [2024-12-06 19:26:32.523077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.523104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.523217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.523244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.523449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.523514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.523637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.523684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.523781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.523808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 
00:28:22.442 [2024-12-06 19:26:32.523925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.523952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.524036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.524060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.524224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.524264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.524416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.524444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.524527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.524552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 
00:28:22.442 [2024-12-06 19:26:32.524672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.524701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.524820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.524847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.524952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.524979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.525089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.525118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.525232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.525259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 
00:28:22.442 [2024-12-06 19:26:32.525374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.525400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.525514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.525540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.525653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.525691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.525782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.525807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.525896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.525922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 
00:28:22.442 [2024-12-06 19:26:32.526034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.526060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.526177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.526204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.526320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.526352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.526455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.526496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.442 [2024-12-06 19:26:32.526630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.526686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 
00:28:22.442 [2024-12-06 19:26:32.526815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.442 [2024-12-06 19:26:32.526843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.442 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.526926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.526952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.527154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.527181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.527394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.527446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.527583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.527609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 
00:28:22.443 [2024-12-06 19:26:32.527758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.527785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.527894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.527920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.528045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.528093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.528300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.528363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.528512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.528544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 
00:28:22.443 [2024-12-06 19:26:32.528662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.528697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.528814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.528842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.528960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.529025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.529238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.529292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.529481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.529533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 
00:28:22.443 [2024-12-06 19:26:32.529654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.529689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.529833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.529860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.529979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.530005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.530178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.530231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.530325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.530350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 
00:28:22.443 [2024-12-06 19:26:32.530462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.530488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.530608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.530634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.530734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.530760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.530850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.530877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.531025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.531051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 
00:28:22.443 [2024-12-06 19:26:32.531160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.531188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.531332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.531358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.531432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.531458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.531585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.531625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.531758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.531799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 
00:28:22.443 [2024-12-06 19:26:32.531914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.531942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.532032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.532057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.532221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.532267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.532446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.532501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.532611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.532638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 
00:28:22.443 [2024-12-06 19:26:32.532760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.532786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.532882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.532923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.443 qpair failed and we were unable to recover it. 00:28:22.443 [2024-12-06 19:26:32.533042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.443 [2024-12-06 19:26:32.533070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.533265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.533331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.533655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.533746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 
00:28:22.444 [2024-12-06 19:26:32.533875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.533903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.534034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.534074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.534172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.534201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.534335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.534386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.534528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.534556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 
00:28:22.444 [2024-12-06 19:26:32.534691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.534732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.534828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.534854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.535076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.535131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.535248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.535276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.535417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.535444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 
00:28:22.444 [2024-12-06 19:26:32.535547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.535576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.535724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.535751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.535889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.535915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.536002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.536027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.536135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.536161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 
00:28:22.444 [2024-12-06 19:26:32.536276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.536304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.536427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.536454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.536594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.536622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.536725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.536766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.536886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.536914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 
00:28:22.444 [2024-12-06 19:26:32.537039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.537065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.537177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.537203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.537344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.537404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.537517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.537544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.537650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.537685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 
00:28:22.444 [2024-12-06 19:26:32.537798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.537825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.537938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.537965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.538054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.538079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.538197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.538223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.538335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.538362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 
00:28:22.444 [2024-12-06 19:26:32.538468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.538495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.538603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.538630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.538770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.538809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.538957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.444 [2024-12-06 19:26:32.538986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.444 qpair failed and we were unable to recover it. 00:28:22.444 [2024-12-06 19:26:32.539105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.539132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 
00:28:22.445 [2024-12-06 19:26:32.539243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.539270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.539377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.539408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.539522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.539549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.539627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.539652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.539767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.539794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 
00:28:22.445 [2024-12-06 19:26:32.539935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.539961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.540103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.540130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.540218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.540248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.540363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.540390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.540529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.540556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 
00:28:22.445 [2024-12-06 19:26:32.540673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.540700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.540819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.540847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.541002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.541051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.541135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.541159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.541248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.541275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 
00:28:22.445 [2024-12-06 19:26:32.541397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.541426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.541536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.541562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.541653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.541692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.541790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.541816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.541900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.541924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 
00:28:22.445 [2024-12-06 19:26:32.542068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.542095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.542185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.542212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.542364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.542391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.542516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.542543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.542670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.542710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 
00:28:22.445 [2024-12-06 19:26:32.542800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.542825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.542938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.542964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.543051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.543078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.543305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.543361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.543437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.543461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 
00:28:22.445 [2024-12-06 19:26:32.543548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.543575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.543690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.543717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.543828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.543854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.543965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.543992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.445 [2024-12-06 19:26:32.544080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.544106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 
00:28:22.445 [2024-12-06 19:26:32.544241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.445 [2024-12-06 19:26:32.544268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.445 qpair failed and we were unable to recover it. 00:28:22.446 [2024-12-06 19:26:32.544401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.446 [2024-12-06 19:26:32.544430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.446 qpair failed and we were unable to recover it. 00:28:22.446 [2024-12-06 19:26:32.544585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.446 [2024-12-06 19:26:32.544626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.446 qpair failed and we were unable to recover it. 00:28:22.446 [2024-12-06 19:26:32.544766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.446 [2024-12-06 19:26:32.544794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.446 qpair failed and we were unable to recover it. 00:28:22.446 [2024-12-06 19:26:32.544912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.446 [2024-12-06 19:26:32.544939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.446 qpair failed and we were unable to recover it. 
00:28:22.446 [2024-12-06 19:26:32.545115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.446 [2024-12-06 19:26:32.545170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.446 qpair failed and we were unable to recover it. 00:28:22.446 [2024-12-06 19:26:32.545303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.446 [2024-12-06 19:26:32.545366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.446 qpair failed and we were unable to recover it. 00:28:22.446 [2024-12-06 19:26:32.545460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.446 [2024-12-06 19:26:32.545488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.446 qpair failed and we were unable to recover it. 00:28:22.446 [2024-12-06 19:26:32.545568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.446 [2024-12-06 19:26:32.545593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.446 qpair failed and we were unable to recover it. 00:28:22.446 [2024-12-06 19:26:32.545702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.446 [2024-12-06 19:26:32.545729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.446 qpair failed and we were unable to recover it. 
00:28:22.446 [2024-12-06 19:26:32.545814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.545839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.545915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.545940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.546055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.546082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.546164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.546189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.546287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.546328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.546503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.546544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.546706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.546747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.546895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.546923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.547016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.547041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.547154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.547181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.547299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.547326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.547439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.547466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.547585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.547611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.547706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.547733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.547877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.547904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.548018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.548045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.548160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.548187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.548302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.548329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.548473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.548500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.548596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.548625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.548744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.548772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.548861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.548886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.548976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.549003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.549145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.549176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.549341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.549393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.549511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.549537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.549646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.549685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.549794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.549821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.549939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.446 [2024-12-06 19:26:32.549966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.446 qpair failed and we were unable to recover it.
00:28:22.446 [2024-12-06 19:26:32.550081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.550108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.550232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.550258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.550334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.550359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.550442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.550466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.550622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.550662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.550777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.550805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.550919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.550947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.551089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.551116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.551235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.551262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.551357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.551398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.551521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.551550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.551702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.551729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.551845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.551872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.552043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.552069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.552246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.552302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.552421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.552448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.552528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.552552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.552634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.552658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.552784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.552811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.552947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.552974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.553054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.553078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.553188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.553220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.553309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.553333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.553428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.553468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.553620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.553650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.553774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.553802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.553919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.553972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.554217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.554282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.554609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.554694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.554805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.554833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.554925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.554986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.555129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.555207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.555524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.555595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.555808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.555833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.555962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.556002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.556129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.556157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.556280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.556333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.556419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.556446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.556564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.556599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.556726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-12-06 19:26:32.556753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.447 qpair failed and we were unable to recover it.
00:28:22.447 [2024-12-06 19:26:32.556870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.556898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.556992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.557020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.557141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.557168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.557283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.557310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.557401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.557425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.557563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.557589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.557681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.557709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.557826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.557853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.557955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.557995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.558141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.558169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.558262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.558289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.558368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.558393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.558507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.558534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.558645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.558678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.558767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.558794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.558891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.558932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.559046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.559086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.559212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.559240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.559323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.559348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.559461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.559487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.559592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.559618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.559741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.559768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.559862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.559889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.560008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.560034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.560112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.560137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.560229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.560258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.560373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.560402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.560484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.560509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.560623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.560650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.560796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.560823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.560907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.560932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.561018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.561046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.561253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.561306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.561406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.561433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.561525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.561551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.561676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.561707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.561796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.561823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.561964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.561991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.562105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.562186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.562342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.562371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.562513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.562540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.562679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.562706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.562842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.562870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.563019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.563045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.563154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.563181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.563324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.448 [2024-12-06 19:26:32.563352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.448 qpair failed and we were unable to recover it.
00:28:22.448 [2024-12-06 19:26:32.563469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.563496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 00:28:22.448 [2024-12-06 19:26:32.563579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.563605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 00:28:22.448 [2024-12-06 19:26:32.563689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.563721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 00:28:22.448 [2024-12-06 19:26:32.563833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.563861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 00:28:22.448 [2024-12-06 19:26:32.563956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.563983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 
00:28:22.448 [2024-12-06 19:26:32.564125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.564152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 00:28:22.448 [2024-12-06 19:26:32.564299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.564325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 00:28:22.448 [2024-12-06 19:26:32.564412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.564439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 00:28:22.448 [2024-12-06 19:26:32.564551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.564578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 00:28:22.448 [2024-12-06 19:26:32.564720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.448 [2024-12-06 19:26:32.564748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.448 qpair failed and we were unable to recover it. 
00:28:22.448 [2024-12-06 19:26:32.564841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.564867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.564950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.564975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.565114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.565141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.565254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.565281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.565395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.565422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.565564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.565592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.565713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.565740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.565879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.565908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.566035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.566075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.566197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.566225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.566416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.566468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.566582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.566608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.566729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.566756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.566867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.566893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.567004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.567030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.567106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.567133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.567237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.567269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.567358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.567385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.567488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.567528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.567645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.567680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.567804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.567832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.567945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.567972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.568054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.568079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.568193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.568220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.568366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.568433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.568638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.568673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.568766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.568793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.568873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.568899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.569009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.569036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.569151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.569179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.569321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.569350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.569467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.569494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.569605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.569637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.569738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.569765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.569849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.569874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.569948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.569974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.570061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.570088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.570176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.570203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.570283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.570315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.570479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.570536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.570747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.570779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.570895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.570922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.571083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.571134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.571316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.571367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.571480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.571507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.571593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.571617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.571780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.571808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.571961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.572001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.572164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.572205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.572339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.572367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.572461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.572489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.572577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.572604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.572717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.572745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.572863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.572890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.449 [2024-12-06 19:26:32.572999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.573026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 
00:28:22.449 [2024-12-06 19:26:32.573119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-12-06 19:26:32.573146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.449 qpair failed and we were unable to recover it. 00:28:22.450 [2024-12-06 19:26:32.573260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.450 [2024-12-06 19:26:32.573287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.450 qpair failed and we were unable to recover it. 00:28:22.450 [2024-12-06 19:26:32.573439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.450 [2024-12-06 19:26:32.573466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.450 qpair failed and we were unable to recover it. 00:28:22.450 [2024-12-06 19:26:32.573575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.450 [2024-12-06 19:26:32.573602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.450 qpair failed and we were unable to recover it. 00:28:22.450 [2024-12-06 19:26:32.573743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.450 [2024-12-06 19:26:32.573777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.450 qpair failed and we were unable to recover it. 
00:28:22.450 [2024-12-06 19:26:32.573863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.450 [2024-12-06 19:26:32.573890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.450 qpair failed and we were unable to recover it. 00:28:22.450 [2024-12-06 19:26:32.573971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.450 [2024-12-06 19:26:32.573999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.450 qpair failed and we were unable to recover it. 00:28:22.450 [2024-12-06 19:26:32.574134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.450 [2024-12-06 19:26:32.574160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.450 qpair failed and we were unable to recover it. 00:28:22.450 [2024-12-06 19:26:32.574249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.450 [2024-12-06 19:26:32.574274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.450 qpair failed and we were unable to recover it. 00:28:22.450 [2024-12-06 19:26:32.574383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.450 [2024-12-06 19:26:32.574410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.450 qpair failed and we were unable to recover it. 
00:28:22.450 [2024-12-06 19:26:32.574496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.450 [2024-12-06 19:26:32.574523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.450 qpair failed and we were unable to recover it.
00:28:22.452 [repeated: the same posix_sock_create (errno = 111) / nvme_tcp_qpair_connect_sock error pair recurs continuously from 19:26:32.574 through 19:26:32.590 for tqpair=0x7f82d4000b90, 0x7f82c8000b90, 0x7f82cc000b90, and 0x6bcfa0, each attempt ending with "qpair failed and we were unable to recover it."]
00:28:22.452 [2024-12-06 19:26:32.590678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.590705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.590796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.590822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.590940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.590970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.591081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.591108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.591248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.591274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 
00:28:22.452 [2024-12-06 19:26:32.591367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.591393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.591505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.591533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.591622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.591648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.591736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.591761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.591899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.591926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 
00:28:22.452 [2024-12-06 19:26:32.592039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.592066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.592206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.592233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.592425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.592499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.592630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.592659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.592813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.592841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 
00:28:22.452 [2024-12-06 19:26:32.592962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.592989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.593196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.593266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.593383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.593410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.593553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.593582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.593696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.593723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 
00:28:22.452 [2024-12-06 19:26:32.593814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.593841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.593928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.593956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.594046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.594073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.594318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.594382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.594537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.594565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 
00:28:22.452 [2024-12-06 19:26:32.594677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.594717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.594892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.594937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.595054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.595091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.595272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.595323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.595409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.595435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 
00:28:22.452 [2024-12-06 19:26:32.595519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.595547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.595658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.595690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.595806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.595832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.595949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.595976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.596155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.596237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 
00:28:22.452 [2024-12-06 19:26:32.596393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.596421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.596505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.596530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.596605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.596629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.596774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.596815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 00:28:22.452 [2024-12-06 19:26:32.596977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.597005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.452 qpair failed and we were unable to recover it. 
00:28:22.452 [2024-12-06 19:26:32.597101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.452 [2024-12-06 19:26:32.597128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.597281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.597331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.597433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.597492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.597601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.597627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.597756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.597786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 
00:28:22.453 [2024-12-06 19:26:32.597923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.597949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.598059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.598111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.598356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.598422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.598625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.598652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.598761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.598808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 
00:28:22.453 [2024-12-06 19:26:32.598940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.598968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.599134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.599193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.599359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.599411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.599522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.599564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.599684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.599723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 
00:28:22.453 [2024-12-06 19:26:32.599869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.599897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.600018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.600044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.600156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.600182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.600294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.600322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.600420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.600451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 
00:28:22.453 [2024-12-06 19:26:32.600551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.600578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.600743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.600783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.600883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.600912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.601034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.601061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.601283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.601329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 
00:28:22.453 [2024-12-06 19:26:32.601510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.601578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.601770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.601798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.601918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.601945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.602060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.602097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.602238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.602301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 
00:28:22.453 [2024-12-06 19:26:32.602553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.602618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.602785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.602814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.602939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.602965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.603129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.603194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 00:28:22.453 [2024-12-06 19:26:32.603467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.453 [2024-12-06 19:26:32.603532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.453 qpair failed and we were unable to recover it. 
00:28:22.453 [2024-12-06 19:26:32.603739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.603767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.603878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.603903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.604022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.604055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.604168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.604194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.604416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.604481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.604688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.604717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.604837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.604863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.604976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.605002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.605139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.605166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.605282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.605349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.605527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.605553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.605637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.605661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.605788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.605815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.605899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.605923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.606048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.453 [2024-12-06 19:26:32.606075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.453 qpair failed and we were unable to recover it.
00:28:22.453 [2024-12-06 19:26:32.606167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.606193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.606438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.606502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.606745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.606773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.606889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.606921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.607037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.607064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.607179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.607206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.607287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.607311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.607407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.607447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.607575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.607615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.607714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.607741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.607858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.607886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.608006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.608033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.608153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.608179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.608327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.608379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.608491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.608517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.608654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.608687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.608776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.608803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.608951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.608978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.609117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.609143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.609239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.609267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.609350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.609375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.609509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.609536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.609731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.609758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.609898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.609925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.610037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.610064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.610183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.610211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.610356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.610410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.610518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.610544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.610653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.610686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.610779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.610805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.610912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.610952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.611081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.611110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.611225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.611251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.611393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.611420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.611512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.611541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.611642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.611693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.611817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.611845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.611961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.611988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.612108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.612135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.612225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.612252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.612347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.612376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.612526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.612553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.612683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.612723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.612874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.612902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.613030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.613057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.613161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.613188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.613306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.613334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.613451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.613478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.613560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.613587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.613687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.613714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.613803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.613829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.613910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.613936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.614019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.614046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.614136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.614163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.614249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.614277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.614457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.614509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.614620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.454 [2024-12-06 19:26:32.614647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.454 qpair failed and we were unable to recover it.
00:28:22.454 [2024-12-06 19:26:32.614748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.614783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.614893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.614920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.615036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.615063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.615175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.615202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.615379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.615441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.615726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.615768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.615865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.615895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.616010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.616037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.616151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.616178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.616351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.616401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.616540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.616567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.616689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.616718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.616815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.616843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.616962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.616993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.617083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.617110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.617196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.617223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.617340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.617366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.617503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.617532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.617659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.617694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.617780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.617805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.617895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.617922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.618011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.618037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.618154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.618181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.618418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.618483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 00:28:22.455 [2024-12-06 19:26:32.618633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.618660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 00:28:22.455 [2024-12-06 19:26:32.618781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.618807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 00:28:22.455 [2024-12-06 19:26:32.618922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.618950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 00:28:22.455 [2024-12-06 19:26:32.619045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.619071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 
00:28:22.455 [2024-12-06 19:26:32.619176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.619215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 00:28:22.455 [2024-12-06 19:26:32.619400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.619458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 00:28:22.455 [2024-12-06 19:26:32.619549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.619575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 00:28:22.455 [2024-12-06 19:26:32.619716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.619743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 00:28:22.455 [2024-12-06 19:26:32.619856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.455 [2024-12-06 19:26:32.619883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.455 qpair failed and we were unable to recover it. 
00:28:22.455 [2024-12-06 19:26:32.619978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.620004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.620130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.620157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.620302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.620329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.620441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.620468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.620608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.620635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.620755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.620782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.620893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.620921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.621068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.621094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.621352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.621418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.621613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.621640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.621760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.621800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.621896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.621925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.622041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.622068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.622236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.622287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.622440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.622495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.622636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.622671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.622766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.622792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.622915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.622941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.623079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.623106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.623217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.623245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.623525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.623583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.455 qpair failed and we were unable to recover it.
00:28:22.455 [2024-12-06 19:26:32.623677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.455 [2024-12-06 19:26:32.623703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.623813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.623849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.623968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.623995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.624107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.624133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.624299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.624357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.624470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.624496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.624604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.624631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.624753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.624778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.624862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.624886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.624995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.625019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.625127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.625152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.625260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.625285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.625368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.625394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.625509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.625535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.625658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.625691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.625774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.625800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.625904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.625930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.626043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.626069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.626148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.626173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.626295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.626320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.626464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.626490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.626575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.626602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.626690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.626716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.626827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.626853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.626964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.626989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.627107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.627132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.627226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.627252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.627348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.627387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.627516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.627555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.627678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.627706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.627821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.627846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.627924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.627949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.628061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.628087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.628200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.628227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.628347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.628376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.628493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.628519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.628657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.628689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.628802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.628828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.628945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.628971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.629080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.629111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.629229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.629256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.629376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.629402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.456 [2024-12-06 19:26:32.629521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.456 [2024-12-06 19:26:32.629546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.456 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.629654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.629693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.629785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.629811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.629883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.629908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.630017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.630042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.630180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.630205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.630329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.630354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.630466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.630494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.630636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.630662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.630779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.630805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.630890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.630916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.631046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.631072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.631157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.631183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.631316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.631341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.631484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.631511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.631609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.631635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.631788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.631816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.631958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.631984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.632121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.632146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.632260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.632285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.632376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.632402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.632541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.632566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.632680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.632708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.632821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.632847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.632988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.633013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.633151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.633177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.633298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.633324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.633418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.633445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.633597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.633623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.633723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.633751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.633842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.633868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.457 [2024-12-06 19:26:32.633976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.457 [2024-12-06 19:26:32.634001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.457 qpair failed and we were unable to recover it.
00:28:22.459 [2024-12-06 19:26:32.648677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.459 [2024-12-06 19:26:32.648703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.459 qpair failed and we were unable to recover it. 00:28:22.459 [2024-12-06 19:26:32.648784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.459 [2024-12-06 19:26:32.648810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.459 qpair failed and we were unable to recover it. 00:28:22.459 [2024-12-06 19:26:32.648927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.459 [2024-12-06 19:26:32.648952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.459 qpair failed and we were unable to recover it. 00:28:22.459 [2024-12-06 19:26:32.649064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.459 [2024-12-06 19:26:32.649090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.459 qpair failed and we were unable to recover it. 00:28:22.459 [2024-12-06 19:26:32.649174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.459 [2024-12-06 19:26:32.649199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.459 qpair failed and we were unable to recover it. 
00:28:22.459 [2024-12-06 19:26:32.649278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.459 [2024-12-06 19:26:32.649303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.459 qpair failed and we were unable to recover it. 00:28:22.459 [2024-12-06 19:26:32.649388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.459 [2024-12-06 19:26:32.649413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.459 qpair failed and we were unable to recover it. 00:28:22.459 [2024-12-06 19:26:32.649524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.459 [2024-12-06 19:26:32.649550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.459 qpair failed and we were unable to recover it. 00:28:22.459 [2024-12-06 19:26:32.649659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.459 [2024-12-06 19:26:32.649704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.459 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.649828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.649855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 
00:28:22.460 [2024-12-06 19:26:32.649971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.649996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.650144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.650171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.650323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.650348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.650464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.650489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.650578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.650605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 
00:28:22.460 [2024-12-06 19:26:32.650761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.650788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.650873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.650899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.650987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.651012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.651129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.651155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.651253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.651291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 
00:28:22.460 [2024-12-06 19:26:32.651413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.651446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.651567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.651593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.651740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.651767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.651877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.651903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.651996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.652024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 
00:28:22.460 [2024-12-06 19:26:32.652139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.652164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.652311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.652337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.652482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.652508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.652659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.652691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.652777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.652803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 
00:28:22.460 [2024-12-06 19:26:32.652907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.652941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.653123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.653170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.653296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.653347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.653461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.653487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.653627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.653652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 
00:28:22.460 [2024-12-06 19:26:32.653774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.653799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.653881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.653907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.654020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.654045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.654183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.654208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.654322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.654354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 
00:28:22.460 [2024-12-06 19:26:32.654511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.654536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.654675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.654701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.654820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.654845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.654953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.654978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.655117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.655143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 
00:28:22.460 [2024-12-06 19:26:32.655286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.655311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.655423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.655448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.655593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.655619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.655740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.655766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.655850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.655875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 
00:28:22.460 [2024-12-06 19:26:32.655985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.656012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.460 qpair failed and we were unable to recover it. 00:28:22.460 [2024-12-06 19:26:32.656135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.460 [2024-12-06 19:26:32.656160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.656276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.656302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.656385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.656410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.656523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.656549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 
00:28:22.461 [2024-12-06 19:26:32.656635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.656661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.656784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.656809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.656922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.656947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.657055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.657081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.657163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.657188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 
00:28:22.461 [2024-12-06 19:26:32.657265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.657295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.657381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.657405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.657546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.657571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.657716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.657743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.657855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.657880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 
00:28:22.461 [2024-12-06 19:26:32.657970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.657996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.658091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.658118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.658230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.658255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.658337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.658362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.658480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.658507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 
00:28:22.461 [2024-12-06 19:26:32.658589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.658614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.658751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.658788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.658941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.658969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.659077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.659105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.659228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.659255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 
00:28:22.461 [2024-12-06 19:26:32.659399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.659426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.659518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.659543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.659650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.659686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.659839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.659866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 00:28:22.461 [2024-12-06 19:26:32.660004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.461 [2024-12-06 19:26:32.660043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.461 qpair failed and we were unable to recover it. 
00:28:22.465 [2024-12-06 19:26:32.676330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.676369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.676510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.676543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.676646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.676688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.676811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.676842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.676963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.676989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 
00:28:22.465 [2024-12-06 19:26:32.677110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.677137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.677250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.677276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.677400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.677427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.677531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.677570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.677723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.677751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 
00:28:22.465 [2024-12-06 19:26:32.677838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.677864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.677965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.677997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.678145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.678193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.678300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.678325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.678417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.678443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 
00:28:22.465 [2024-12-06 19:26:32.678560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.678586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.678701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.678727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.678814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.678848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.678943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.678970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.679055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.679082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 
00:28:22.465 [2024-12-06 19:26:32.679225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.679251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.679368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.679394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.465 qpair failed and we were unable to recover it. 00:28:22.465 [2024-12-06 19:26:32.679476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.465 [2024-12-06 19:26:32.679502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.679587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.679613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.679725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.679751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 
00:28:22.466 [2024-12-06 19:26:32.679892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.679918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.679998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.680024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.680154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.680181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.680292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.680317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.680464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.680489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 
00:28:22.466 [2024-12-06 19:26:32.680602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.680628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.680734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.680760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.680851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.680876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.680988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.681013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.681119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.681145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 
00:28:22.466 [2024-12-06 19:26:32.681240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.681268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.681379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.681405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.681492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.681517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.681626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.681651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.681749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.681775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 
00:28:22.466 [2024-12-06 19:26:32.681883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.681908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.682051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.682076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.682161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.682186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.682273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.682298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.682421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.682447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 
00:28:22.466 [2024-12-06 19:26:32.682528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.682553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.682696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.682722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.682817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.682843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.682958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.682984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.683101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.683127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 
00:28:22.466 [2024-12-06 19:26:32.683212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.683238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.683358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.683383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.683472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.683498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.683586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.683613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.683728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.683754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 
00:28:22.466 [2024-12-06 19:26:32.683868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.683893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.684007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.684033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.684120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.684151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.684238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.684264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 00:28:22.466 [2024-12-06 19:26:32.684373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.684399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.466 qpair failed and we were unable to recover it. 
00:28:22.466 [2024-12-06 19:26:32.684514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.466 [2024-12-06 19:26:32.684539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.684619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.684644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.684767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.684792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.684886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.684914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.685055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.685080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 
00:28:22.467 [2024-12-06 19:26:32.685191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.685216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.685330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.685355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.685510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.685535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.685638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.685670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.685793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.685818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 
00:28:22.467 [2024-12-06 19:26:32.685936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.685961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.686051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.686077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.686193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.686219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.686328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.686355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.686449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.686474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 
00:28:22.467 [2024-12-06 19:26:32.686582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.686608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.686721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.686747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.686827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.686853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.686965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.686992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.687135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.687161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 
00:28:22.467 [2024-12-06 19:26:32.687239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.687265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.687349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.687377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.687518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.687545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.687632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.687659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.687755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.687782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 
00:28:22.467 [2024-12-06 19:26:32.687868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.687893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.687967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.687993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.688113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.688139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.688249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.688277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.688359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.688384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 
00:28:22.467 [2024-12-06 19:26:32.688466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.688492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.688590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.688617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.688694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.688720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.467 qpair failed and we were unable to recover it. 00:28:22.467 [2024-12-06 19:26:32.688839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.467 [2024-12-06 19:26:32.688868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.688945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.688971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 
00:28:22.468 [2024-12-06 19:26:32.689063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.689089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.689199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.689224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.689302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.689332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.689452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.689477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.689585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.689610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 
00:28:22.468 [2024-12-06 19:26:32.689699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.689728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.689812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.689838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.689950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.689976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.690118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.690144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.690225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.690251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 
00:28:22.468 [2024-12-06 19:26:32.690366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.690392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.690472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.690497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.690587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.690613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.690726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.690764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.690888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.690922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 
00:28:22.468 [2024-12-06 19:26:32.691021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.691047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.691147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.691173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.691287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.691314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.691430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.691456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.691543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.691576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 
00:28:22.468 [2024-12-06 19:26:32.691755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.691789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.691887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.691920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.692066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.692098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.692201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.692233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.692365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.692403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 
00:28:22.468 [2024-12-06 19:26:32.692550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.692576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.692673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.692700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.692789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.692817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.692959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.692985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.693104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.693139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 
00:28:22.468 [2024-12-06 19:26:32.693281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.693313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.693437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.693477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.693592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.693625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.693782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.693812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.693965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.693992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 
00:28:22.468 [2024-12-06 19:26:32.694109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.468 [2024-12-06 19:26:32.694159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.468 qpair failed and we were unable to recover it. 00:28:22.468 [2024-12-06 19:26:32.694322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.694353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.694523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.694555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.694676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.694722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.694868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.694899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 
00:28:22.469 [2024-12-06 19:26:32.695001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.695028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.695139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.695170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.695304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.695343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.695476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.695508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.695659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.695718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 
00:28:22.469 [2024-12-06 19:26:32.695809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.695835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.695955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.695982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.696097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.696122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.696266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.696302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.696473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.696503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 
00:28:22.469 [2024-12-06 19:26:32.696649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.696690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.696845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.696872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.696987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.697014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.697153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.697184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.697318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.697351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 
00:28:22.469 [2024-12-06 19:26:32.697498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.697529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.697683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.697729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.697859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.697898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.698046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.698093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.698230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.698274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 
00:28:22.469 [2024-12-06 19:26:32.698435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.698485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.698601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.698629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.698755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.698781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.698876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.698901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.698995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.699021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 
00:28:22.469 [2024-12-06 19:26:32.699113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.699139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.699252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.699277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.699359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.699384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.699472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.699498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.699584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.699610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 
00:28:22.469 [2024-12-06 19:26:32.699696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.699723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.699838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.699864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.699945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.699971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.700051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.469 [2024-12-06 19:26:32.700079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.469 qpair failed and we were unable to recover it. 00:28:22.469 [2024-12-06 19:26:32.700191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.700216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 
00:28:22.470 [2024-12-06 19:26:32.700297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.700325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 00:28:22.470 [2024-12-06 19:26:32.700410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.700436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 00:28:22.470 [2024-12-06 19:26:32.700551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.700577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 00:28:22.470 [2024-12-06 19:26:32.700662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.700695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 00:28:22.470 [2024-12-06 19:26:32.700813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.700838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 
00:28:22.470 [2024-12-06 19:26:32.700956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.700983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 00:28:22.470 [2024-12-06 19:26:32.701103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.701131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 00:28:22.470 [2024-12-06 19:26:32.701222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.701253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 00:28:22.470 [2024-12-06 19:26:32.701333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.701359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 00:28:22.470 [2024-12-06 19:26:32.701469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.470 [2024-12-06 19:26:32.701494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.470 qpair failed and we were unable to recover it. 
00:28:22.470 [2024-12-06 19:26:32.701571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.701599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.701718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.701744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.701864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.701890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.701980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.702005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.702087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.702112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.702219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.702244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.702361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.702386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.702497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.702523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.702603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.702629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.702723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.702749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.702856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.702881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.703005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.703031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.703138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.703164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.703244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.703270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.703381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.703407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.703488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.703516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.703606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.703632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.703728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.703754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.703867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.703893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.703986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.704011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.704136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.704164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.470 qpair failed and we were unable to recover it.
00:28:22.470 [2024-12-06 19:26:32.704279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.470 [2024-12-06 19:26:32.704305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.704460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.704485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.704573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.704598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.704698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.704737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.704837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.704872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.705021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.705048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.705191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.705219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.705338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.705364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.705506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.705534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.705650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.705703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.705851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.705885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.706037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.706069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.706249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.706296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.706436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.706461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.706549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.706575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.706669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.706695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.706832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.706880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.707015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.707061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.707192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.707238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.707331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.707356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.707479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.707504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.707647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.707678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.707796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.707825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.707913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.707939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.708024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.708049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.708162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.708188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.708311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.708337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.708428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.708453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.708546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.708572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.708679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.708706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.708809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.708835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.708953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.708978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.709061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.709087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.709214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.709239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.709335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.709360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.709469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.709494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.709578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.709603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.709693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.471 [2024-12-06 19:26:32.709723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.471 qpair failed and we were unable to recover it.
00:28:22.471 [2024-12-06 19:26:32.709821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.709847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.709932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.709958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.710071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.710096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.710219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.710245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.710340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.710366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.710445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.710474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.710595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.710622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.710723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.710750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.710871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.710904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.711076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.711107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.711236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.711269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.711400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.711433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.711598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.711631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.711745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.711772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.711904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.711938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.712052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.712084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.712226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.712277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.712429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.712455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.712562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.712594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.712726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.712772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.712897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.712923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.713066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.713092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.713175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.713200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.713322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.713346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.713486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.713512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.713595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.713620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.713764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.713790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.713904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.713929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.714043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.714069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.714141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.714167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.714282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.714307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.714384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.714410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.714539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.714566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.714657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.714690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.714768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.714794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.714903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.472 [2024-12-06 19:26:32.714928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.472 qpair failed and we were unable to recover it.
00:28:22.472 [2024-12-06 19:26:32.715020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.472 [2024-12-06 19:26:32.715047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.472 qpair failed and we were unable to recover it. 00:28:22.472 [2024-12-06 19:26:32.715161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.472 [2024-12-06 19:26:32.715186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.472 qpair failed and we were unable to recover it. 00:28:22.472 [2024-12-06 19:26:32.715279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.715307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.715417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.715442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.715558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.715584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 
00:28:22.473 [2024-12-06 19:26:32.715690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.715716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.715831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.715856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.715935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.715961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.716042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.716068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.716216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.716255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 
00:28:22.473 [2024-12-06 19:26:32.716407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.716436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.716552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.716578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.716682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.716710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.716860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.716886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.716996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.717023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 
00:28:22.473 [2024-12-06 19:26:32.717140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.717167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.717266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.717298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.717408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.717433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.717511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.717537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.717618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.717649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 
00:28:22.473 [2024-12-06 19:26:32.717790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.717817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.717970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.718003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.718129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.718166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.718295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.718343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.718521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.718553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 
00:28:22.473 [2024-12-06 19:26:32.718699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.718727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.718841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.718868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.719014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.719051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.719224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.719256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.719430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.719467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 
00:28:22.473 [2024-12-06 19:26:32.719570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.719602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.719705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.719749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.719894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.719921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.720014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.720045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.720155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.720187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 
00:28:22.473 [2024-12-06 19:26:32.720312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.720345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.720488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.720520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.720708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.720746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.720842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.720872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 00:28:22.473 [2024-12-06 19:26:32.720961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.473 [2024-12-06 19:26:32.720987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.473 qpair failed and we were unable to recover it. 
00:28:22.473 [2024-12-06 19:26:32.721080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.721106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.721224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.721249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.721343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.721370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.721470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.721495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.721583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.721609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 
00:28:22.474 [2024-12-06 19:26:32.721726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.721752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.721865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.721891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.721979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.722005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.722090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.722116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.722219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.722250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 
00:28:22.474 [2024-12-06 19:26:32.722340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.722367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.722473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.722499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.722617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.722648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.722776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.722804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.722892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.722918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 
00:28:22.474 [2024-12-06 19:26:32.723045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.723072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.723194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.723222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.723349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.723382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.723513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.723544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.723638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.723682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 
00:28:22.474 [2024-12-06 19:26:32.723830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.723857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.723993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.724032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.724164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.724201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.724327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.724358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.724524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.724557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 
00:28:22.474 [2024-12-06 19:26:32.724717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.724744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.724898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.724925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.725031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.725056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.725143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.725171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.725334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.725380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 
00:28:22.474 [2024-12-06 19:26:32.725565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.725615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.725794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.725821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.725916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.474 [2024-12-06 19:26:32.725943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.474 qpair failed and we were unable to recover it. 00:28:22.474 [2024-12-06 19:26:32.726059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.726086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 00:28:22.475 [2024-12-06 19:26:32.726167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.726193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 
00:28:22.475 [2024-12-06 19:26:32.726276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.726309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 00:28:22.475 [2024-12-06 19:26:32.726442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.726469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 00:28:22.475 [2024-12-06 19:26:32.726576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.726603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 00:28:22.475 [2024-12-06 19:26:32.726707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.726738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 00:28:22.475 [2024-12-06 19:26:32.726863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.726889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 
00:28:22.475 [2024-12-06 19:26:32.727008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.727034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 00:28:22.475 [2024-12-06 19:26:32.727144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.727175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 00:28:22.475 [2024-12-06 19:26:32.727285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.727319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 00:28:22.475 [2024-12-06 19:26:32.727454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.727486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 00:28:22.475 [2024-12-06 19:26:32.727618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.727649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 
00:28:22.475 [2024-12-06 19:26:32.727825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.475 [2024-12-06 19:26:32.727853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.475 qpair failed and we were unable to recover it. 
00:28:22.478 [... the same pair of messages (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error) followed by "qpair failed and we were unable to recover it." repeats continuously from 19:26:32.727940 through 19:26:32.744696, alternating between tqpair=0x7f82d4000b90 and tqpair=0x7f82cc000b90, always against addr=10.0.0.2, port=4420 ...]
00:28:22.478 [2024-12-06 19:26:32.744816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.744842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.744923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.744952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.745037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.745062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.745168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.745193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.745271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.745299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 
00:28:22.478 [2024-12-06 19:26:32.745408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.745433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.745518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.745544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.745622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.745647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.745757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.745783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.745872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.745902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 
00:28:22.478 [2024-12-06 19:26:32.746022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.746047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.746138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.746164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.746277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.746303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.746406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.746431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.746535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.746562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 
00:28:22.478 [2024-12-06 19:26:32.746655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.746688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.746806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.746833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.746961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.746986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.747077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.747102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.747182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.747207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 
00:28:22.478 [2024-12-06 19:26:32.747318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.747343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.747454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.747483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.747610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.747636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.747744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.478 [2024-12-06 19:26:32.747770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.478 qpair failed and we were unable to recover it. 00:28:22.478 [2024-12-06 19:26:32.747880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.479 [2024-12-06 19:26:32.747909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.479 qpair failed and we were unable to recover it. 
00:28:22.479 [2024-12-06 19:26:32.748545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.479 [2024-12-06 19:26:32.748574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.479 qpair failed and we were unable to recover it.
00:28:22.480 [2024-12-06 19:26:32.758974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.480 [2024-12-06 19:26:32.759001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.759094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.759138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.759299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.759331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.759459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.759490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.759600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.759638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 
00:28:22.481 [2024-12-06 19:26:32.759830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.759857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.759998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.760026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.760141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.760186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.760315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.760346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.760491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.760524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 
00:28:22.481 [2024-12-06 19:26:32.760660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.760694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.760825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.760852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.760995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.761026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.761140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.761177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.761310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.761342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 
00:28:22.481 [2024-12-06 19:26:32.761442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.761475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.761618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.761650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.761784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.761810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.761949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.761982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.762144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.762175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 
00:28:22.481 [2024-12-06 19:26:32.762289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.762328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.762463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.762505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.762626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.762673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.762839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.762872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.763011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.763043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 
00:28:22.481 [2024-12-06 19:26:32.763146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.763177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.763312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.763346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.763458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.763491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.763589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.763621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.763768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.763801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 
00:28:22.481 [2024-12-06 19:26:32.763940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.763971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.764109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.764147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.764290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.764322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.764494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.764529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.764638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.764687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 
00:28:22.481 [2024-12-06 19:26:32.764790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.764826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.764949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.764982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.765170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.765217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.481 qpair failed and we were unable to recover it. 00:28:22.481 [2024-12-06 19:26:32.765396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.481 [2024-12-06 19:26:32.765438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.765584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.765622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 
00:28:22.482 [2024-12-06 19:26:32.765808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.765840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.765972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.766011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.766118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.766150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.766289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.766326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.766468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.766501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 
00:28:22.482 [2024-12-06 19:26:32.766637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.766684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.766822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.766854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.766987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.767018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.767192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.767224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.767370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.767412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 
00:28:22.482 [2024-12-06 19:26:32.767587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.767630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.767802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.767835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.767976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.768016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.768176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.768209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.768317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.768351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 
00:28:22.482 [2024-12-06 19:26:32.768491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.768524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.768650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.768698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.768827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.768861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.768955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.769011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.769195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.769235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 
00:28:22.482 [2024-12-06 19:26:32.769359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.769412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.769546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.769602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.769799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.769835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.769940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.769973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.770130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.770163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 
00:28:22.482 [2024-12-06 19:26:32.770305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.770338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.770511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.770544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.770682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.770716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.770819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.770853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.771002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.771034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 
00:28:22.482 [2024-12-06 19:26:32.771180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.771221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.771360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.771392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.771537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.771569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.771716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.771750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 00:28:22.482 [2024-12-06 19:26:32.771861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.482 [2024-12-06 19:26:32.771899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.482 qpair failed and we were unable to recover it. 
00:28:22.482 [2024-12-06 19:26:32.772038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.482 [2024-12-06 19:26:32.772079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.482 qpair failed and we were unable to recover it.
[... the three records above repeat for roughly 80 consecutive reconnect attempts (19:26:32.772038 through 19:26:32.785652), each connect() failing with errno = 111 against tqpair=0x7f82cc000b90 at 10.0.0.2, port=4420 ...]
00:28:22.485 [2024-12-06 19:26:32.785828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6caf30 is same with the state(6) to be set
00:28:22.485 [2024-12-06 19:26:32.786046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.485 [2024-12-06 19:26:32.786089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.485 qpair failed and we were unable to recover it.
[... the same errno = 111 failure repeats for roughly 25 further attempts against tqpair=0x6bcfa0, then roughly 10 more against tqpair=0x7f82cc000b90, through 19:26:32.791206 ...]
00:28:22.486 [2024-12-06 19:26:32.791177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.486 [2024-12-06 19:26:32.791206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.486 qpair failed and we were unable to recover it.
00:28:22.486 [2024-12-06 19:26:32.791299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.791327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.791444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.791472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.791568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.791595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.791706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.791735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.791883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.791916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 
00:28:22.486 [2024-12-06 19:26:32.791999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.792026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.792155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.792183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.792274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.792303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.792382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.792409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.792502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.792530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 
00:28:22.486 [2024-12-06 19:26:32.792619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.792646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.792747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.792775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.792904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.792932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.793015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.793043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.793140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.793168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 
00:28:22.486 [2024-12-06 19:26:32.793298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.793326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.793450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.793477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.793562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.793590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.793719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.793748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.793868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.793896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 
00:28:22.486 [2024-12-06 19:26:32.793984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.794012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.794111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.794140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.794253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.794280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.794431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.794459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.794585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.794613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 
00:28:22.486 [2024-12-06 19:26:32.794702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.794730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.794820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.794847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.794975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.795003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.795087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.795114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.795213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.795241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 
00:28:22.486 [2024-12-06 19:26:32.795367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.795395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.795519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.795547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.795650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.795684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.486 [2024-12-06 19:26:32.795776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.486 [2024-12-06 19:26:32.795804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.486 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.795904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.795932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 
00:28:22.487 [2024-12-06 19:26:32.796061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.796089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.796206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.796233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.796357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.796384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.796518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.796561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.796741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.796773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 
00:28:22.487 [2024-12-06 19:26:32.796914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.796944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.797041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.797071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.797200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.797230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.797334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.797363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.797491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.797520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 
00:28:22.487 [2024-12-06 19:26:32.797660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.797699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.797851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.797879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.797998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.798026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.798112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.798141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.798269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.798296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 
00:28:22.487 [2024-12-06 19:26:32.798387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.798419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.798551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.798582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.798702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.798732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.798833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.798862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.798970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.799000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 
00:28:22.487 [2024-12-06 19:26:32.799095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.799124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.799244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.799281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.799411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.799440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.799571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.799603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.799754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.799784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 
00:28:22.487 [2024-12-06 19:26:32.799885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.799914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.800020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.800048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.800144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.800172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.800299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.800327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.800450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.800478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 
00:28:22.487 [2024-12-06 19:26:32.800580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.800611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.800728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.800763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.800858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.800887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.801022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.801056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.487 [2024-12-06 19:26:32.801164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.801195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 
00:28:22.487 [2024-12-06 19:26:32.801296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.487 [2024-12-06 19:26:32.801326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.487 qpair failed and we were unable to recover it. 00:28:22.488 [2024-12-06 19:26:32.801451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.488 [2024-12-06 19:26:32.801486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.488 qpair failed and we were unable to recover it. 00:28:22.488 [2024-12-06 19:26:32.801627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.488 [2024-12-06 19:26:32.801657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.488 qpair failed and we were unable to recover it. 00:28:22.488 [2024-12-06 19:26:32.801792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.488 [2024-12-06 19:26:32.801823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.488 qpair failed and we were unable to recover it. 00:28:22.488 [2024-12-06 19:26:32.801929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.488 [2024-12-06 19:26:32.801959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.488 qpair failed and we were unable to recover it. 
00:28:22.488 [2024-12-06 19:26:32.802060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.802089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.802172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.802200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.802326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.802354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.802455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.802483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.802599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.802627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.802782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.802811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.802935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.802963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.803044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.803072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.803204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.803232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.803333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.803360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.803454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.803481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.803613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.803645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.803841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.803872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.804049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.804080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.804232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.804275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.804427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.804458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.804557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.804587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.804715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.804746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.804868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.804897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.804993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.805021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.805120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.805148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.805306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.805334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.805425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.805453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.805545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.805573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.805698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.805727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.805818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.805846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.805961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.805988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.806113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.806142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.806231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.488 [2024-12-06 19:26:32.806259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.488 qpair failed and we were unable to recover it.
00:28:22.488 [2024-12-06 19:26:32.806343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.806371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.806503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.806531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.806613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.806641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.806750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.806784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.806900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.806929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.807081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.807113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.807210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.807239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.807391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.807425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.807561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.807591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.807720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.807750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.807880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.807909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.808033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.808063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.808160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.808191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.808299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.808327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.808425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.808453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.808542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.808570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.808695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.808724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.808818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.808846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.808969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.808996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.809101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.809131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.809214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.809242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.809368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.809400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.809537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.809567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.809722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.809753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.809903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.809932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.810031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.810061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.810181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.810209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.810294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.810324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.810472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.810502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.810601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.810631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.810739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.810775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.810929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.810959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.811089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.811124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.811282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.811312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.811441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.811470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.811560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.811593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.811719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.811748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.811873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.489 [2024-12-06 19:26:32.811902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.489 qpair failed and we were unable to recover it.
00:28:22.489 [2024-12-06 19:26:32.811999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.812027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.812150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.812178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.812300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.812328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.812426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.812454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.812596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.812627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.812768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.812804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.812928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.812958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.813049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.813078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.813195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.813225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.813328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.813358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.813481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.813510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.813642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.813677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.813803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.813831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.813954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.813982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.814105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.814133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.814217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.814245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.814334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.814362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.814514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.814542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.814662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.814696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.814794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.814823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.814915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.814945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.815079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.815122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.815249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.815292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.815425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.815450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.815561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.815586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.815674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.815700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.815812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.815837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.815921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.815946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.816038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.816063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.816134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.816159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.816243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.816268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.816360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.816385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.816521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.816546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.816684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.816710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.816805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.816832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.816925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.816950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.817028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.817053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.490 [2024-12-06 19:26:32.817141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.490 [2024-12-06 19:26:32.817167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.490 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.817299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.817331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.817410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.817435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.817576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.817601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.817715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.817740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.817857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.817882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.817969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.817995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.818082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.818107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.818191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.818216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.818309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.818334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.818449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.491 [2024-12-06 19:26:32.818474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.491 qpair failed and we were unable to recover it.
00:28:22.491 [2024-12-06 19:26:32.818579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.818605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.818722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.818747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.818829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.818854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.818968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.818993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.819115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.819140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 
00:28:22.491 [2024-12-06 19:26:32.819217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.819243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.819359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.819384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.819490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.819515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.819624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.819650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.819775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.819800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 
00:28:22.491 [2024-12-06 19:26:32.819918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.819944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.820052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.820077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.820160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.820185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.820307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.820332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.820435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.820460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 
00:28:22.491 [2024-12-06 19:26:32.820546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.820571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.820687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.820713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.820820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.820849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.820955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.820980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.821060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.821086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 
00:28:22.491 [2024-12-06 19:26:32.821203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.821228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.821333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.821359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.821450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.821476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.821557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.821582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.821696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.821722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 
00:28:22.491 [2024-12-06 19:26:32.821835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.821860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.821975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.822000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.491 [2024-12-06 19:26:32.822085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.491 [2024-12-06 19:26:32.822110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.491 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.822201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.822226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.822340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.822365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 
00:28:22.492 [2024-12-06 19:26:32.822472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.822497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.822591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.822616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.822741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.822771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.822907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.822932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.823051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.823077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 
00:28:22.492 [2024-12-06 19:26:32.823216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.823241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.823320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.823345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.823442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.823468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.823604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.823628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.823716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.823741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 
00:28:22.492 [2024-12-06 19:26:32.823833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.823857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.823969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.823994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.824079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.824104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.824184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.824209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.824321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.824346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 
00:28:22.492 [2024-12-06 19:26:32.824460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.824486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.824615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.824640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.824776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.824801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.824917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.824942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.825058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.825083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 
00:28:22.492 [2024-12-06 19:26:32.825172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.825198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.825277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.825302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.825389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.825414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.825531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.825556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.825645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.825677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 
00:28:22.492 [2024-12-06 19:26:32.825792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.825817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.825929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.825954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.826060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.826085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.826168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.826198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.826288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.826314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 
00:28:22.492 [2024-12-06 19:26:32.826426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.826451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.826591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.826616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.826737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.826763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.826841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.826866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.826950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.826975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 
00:28:22.492 [2024-12-06 19:26:32.827059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.492 [2024-12-06 19:26:32.827085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.492 qpair failed and we were unable to recover it. 00:28:22.492 [2024-12-06 19:26:32.827197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.827222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 00:28:22.493 [2024-12-06 19:26:32.827310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.827335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 00:28:22.493 [2024-12-06 19:26:32.827432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.827457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 00:28:22.493 [2024-12-06 19:26:32.827564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.827589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 
00:28:22.493 [2024-12-06 19:26:32.827689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.827715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 00:28:22.493 [2024-12-06 19:26:32.827825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.827850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 00:28:22.493 [2024-12-06 19:26:32.827998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.828024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 00:28:22.493 [2024-12-06 19:26:32.828108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.828133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 00:28:22.493 [2024-12-06 19:26:32.828229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.828254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 
00:28:22.493 [2024-12-06 19:26:32.828332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.493 [2024-12-06 19:26:32.828357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.493 qpair failed and we were unable to recover it. 
[The three-line sequence above (connect() failed, errno = 111 (ECONNREFUSED) → sock connection error → qpair failed and we were unable to recover it) repeats continuously from 19:26:32.828472 through 19:26:32.843207, alternating between tqpair=0x6bcfa0 and tqpair=0x7f82d4000b90, all targeting addr=10.0.0.2, port=4420; every attempt failed and no qpair recovered.]
00:28:22.496 [2024-12-06 19:26:32.843326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.843353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.843443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.843470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.843549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.843577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.843708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.843738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.843898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.843935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 
00:28:22.496 [2024-12-06 19:26:32.844067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.844098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.844198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.844226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.844372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.844402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.844517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.844546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.844675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.844705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 
00:28:22.496 [2024-12-06 19:26:32.844804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.844833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.844953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.844982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.845070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.845105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.845236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.845265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.845357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.845386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 
00:28:22.496 [2024-12-06 19:26:32.845503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.845532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.845672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.845702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.845858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.845894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.846018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.846046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.846170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.846198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 
00:28:22.496 [2024-12-06 19:26:32.846321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.846349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.846469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.846502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.846613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.496 [2024-12-06 19:26:32.846642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.496 qpair failed and we were unable to recover it. 00:28:22.496 [2024-12-06 19:26:32.846746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.846773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.846915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.846942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 
00:28:22.497 [2024-12-06 19:26:32.847059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.847086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.847209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.847236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.847350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.847377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.847464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.847491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.847611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.847638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 
00:28:22.497 [2024-12-06 19:26:32.847736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.847763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.847854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.847881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.848001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.848028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.848116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.848143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.848234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.848262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 
00:28:22.497 [2024-12-06 19:26:32.848411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.848438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.848560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.848587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.848707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.848735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.848831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.848858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.848978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.849005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 
00:28:22.497 [2024-12-06 19:26:32.849083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.849110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.849232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.849260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.849376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.849404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.849515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.849542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.849633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.849661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 
00:28:22.497 [2024-12-06 19:26:32.849795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.849822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.849936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.849963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.850086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.850113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.850196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.850222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.850334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.850361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 
00:28:22.497 [2024-12-06 19:26:32.850474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.850501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.850623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.850650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.850777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.850804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.850930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.850958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.851052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.851079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 
00:28:22.497 [2024-12-06 19:26:32.851161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.851188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.851295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.497 [2024-12-06 19:26:32.851322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.497 qpair failed and we were unable to recover it. 00:28:22.497 [2024-12-06 19:26:32.851467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.851494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.851613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.851654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.851804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.851835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 
00:28:22.498 [2024-12-06 19:26:32.851934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.851970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.852095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.852123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.852262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.852294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.852387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.852416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.852540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.852567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 
00:28:22.498 [2024-12-06 19:26:32.852688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.852716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.852830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.852857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.852948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.852975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.853078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.853106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.853225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.853252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 
00:28:22.498 [2024-12-06 19:26:32.853372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.853399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.853498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.853525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.853626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.853653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.853777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.853805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.853898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.853925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 
00:28:22.498 [2024-12-06 19:26:32.854038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.854065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.854181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.854208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.854289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.854316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.854446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.854473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 00:28:22.498 [2024-12-06 19:26:32.854566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-06 19:26:32.854593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.498 qpair failed and we were unable to recover it. 
00:28:22.499 [2024-12-06 19:26:32.858423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.499 [2024-12-06 19:26:32.858464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:22.499 qpair failed and we were unable to recover it.
00:28:22.502 [2024-12-06 19:26:32.869862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.869889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.869984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.870011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.870090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.870117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.870234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.870261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.870358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.870385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 
00:28:22.502 [2024-12-06 19:26:32.870497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.870524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.870639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.870683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.870813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.870841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.870928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.870956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.871104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.871131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 
00:28:22.502 [2024-12-06 19:26:32.871255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.871282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.871429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.871456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.871571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.871605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.871757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.871787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.871875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.871906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 
00:28:22.502 [2024-12-06 19:26:32.872038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.502 [2024-12-06 19:26:32.872066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.502 qpair failed and we were unable to recover it. 00:28:22.502 [2024-12-06 19:26:32.872166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.872194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.872292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.872326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.872456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.872484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.872604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.872632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 
00:28:22.503 [2024-12-06 19:26:32.872767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.872795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.872883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.872910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.873027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.873054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.873147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.873174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.873268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.873294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 
00:28:22.503 [2024-12-06 19:26:32.873381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.873408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.873523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.873550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.873693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.873720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.873811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.873838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.873955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.873982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 
00:28:22.503 [2024-12-06 19:26:32.874128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.874155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.874281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.874308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.874429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.874456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.874552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.874579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.874726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.874754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 
00:28:22.503 [2024-12-06 19:26:32.874869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.874897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.875013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.503 [2024-12-06 19:26:32.875039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.503 qpair failed and we were unable to recover it. 00:28:22.503 [2024-12-06 19:26:32.875159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.875186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.875267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.875294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.875410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.875437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 
00:28:22.504 [2024-12-06 19:26:32.875528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.875555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.875640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.875672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.875761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.875787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.875876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.875903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.876023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.876055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 
00:28:22.504 [2024-12-06 19:26:32.876166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.876193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.876316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.876343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.876434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.876461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.876565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.876606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.876753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.876785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 
00:28:22.504 [2024-12-06 19:26:32.876912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.876941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.877091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.877121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.877246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.877274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.877377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.877408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.877534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.877561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 
00:28:22.504 [2024-12-06 19:26:32.877647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.877704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.877818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.877845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.877931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.877959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.878044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.878071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.878162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.878189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 
00:28:22.504 [2024-12-06 19:26:32.878272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.878299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.878423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.878450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.878566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.878593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.878741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.878773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.878899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.878927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 
00:28:22.504 [2024-12-06 19:26:32.879032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.879074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.879205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.879234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.879358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.879386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.879507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.879535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.504 qpair failed and we were unable to recover it. 00:28:22.504 [2024-12-06 19:26:32.879636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.504 [2024-12-06 19:26:32.879669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.505 qpair failed and we were unable to recover it. 
00:28:22.505 [2024-12-06 19:26:32.879793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.505 [2024-12-06 19:26:32.879821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.505 qpair failed and we were unable to recover it. 00:28:22.505 [2024-12-06 19:26:32.879906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.505 [2024-12-06 19:26:32.879937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.505 qpair failed and we were unable to recover it. 00:28:22.505 [2024-12-06 19:26:32.880018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.505 [2024-12-06 19:26:32.880045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.505 qpair failed and we were unable to recover it. 00:28:22.505 [2024-12-06 19:26:32.880145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.505 [2024-12-06 19:26:32.880172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.505 qpair failed and we were unable to recover it. 00:28:22.505 [2024-12-06 19:26:32.880265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.505 [2024-12-06 19:26:32.880291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.505 qpair failed and we were unable to recover it. 
00:28:22.505 [2024-12-06 19:26:32.880368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.505 [2024-12-06 19:26:32.880395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.505 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple repeats continuously from 19:26:32.880368 through 19:26:32.897715 for tqpair=0x6bcfa0, 0x7f82c8000b90, and 0x7f82cc000b90, always with errno = 111, addr=10.0.0.2, port=4420 ...]
00:28:22.507 [2024-12-06 19:26:32.897832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.507 [2024-12-06 19:26:32.897860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.507 qpair failed and we were unable to recover it. 00:28:22.507 [2024-12-06 19:26:32.898009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.507 [2024-12-06 19:26:32.898037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.507 qpair failed and we were unable to recover it. 00:28:22.507 [2024-12-06 19:26:32.898163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.507 [2024-12-06 19:26:32.898191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.507 qpair failed and we were unable to recover it. 00:28:22.507 [2024-12-06 19:26:32.898337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.507 [2024-12-06 19:26:32.898365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.507 qpair failed and we were unable to recover it. 00:28:22.507 [2024-12-06 19:26:32.898490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.507 [2024-12-06 19:26:32.898518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.507 qpair failed and we were unable to recover it. 
00:28:22.507 [2024-12-06 19:26:32.898634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.507 [2024-12-06 19:26:32.898661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.507 qpair failed and we were unable to recover it. 00:28:22.507 [2024-12-06 19:26:32.898768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.507 [2024-12-06 19:26:32.898797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.507 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.898955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.898983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.899107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.899134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.899285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.899313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 
00:28:22.508 [2024-12-06 19:26:32.899409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.899437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.899558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.899590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.899724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.899755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.899885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.899915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.900011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.900042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 
00:28:22.508 [2024-12-06 19:26:32.900204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.900234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.900376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.900421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.900585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.900614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.900757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.900787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.900870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.900899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 
00:28:22.508 [2024-12-06 19:26:32.901058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.901086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.901185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.901214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.901314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.901343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.901497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.901526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.901615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.901644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 
00:28:22.508 [2024-12-06 19:26:32.901758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.901787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.901909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.901939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.902061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.902090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.902220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.902248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.902369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.902398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 
00:28:22.508 [2024-12-06 19:26:32.902539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.902568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.902653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.902689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.902802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.902837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.902977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.903008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.903132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.903163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 
00:28:22.508 [2024-12-06 19:26:32.903316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.903346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.903477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.903506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.903598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.903627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.903765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.903794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.903883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.903914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 
00:28:22.508 [2024-12-06 19:26:32.904067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.904095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.904254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.904285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.904413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.904445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.904591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.904636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 00:28:22.508 [2024-12-06 19:26:32.904764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.508 [2024-12-06 19:26:32.904807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.508 qpair failed and we were unable to recover it. 
00:28:22.508 [2024-12-06 19:26:32.904945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.904977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.905103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.905132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.905229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.905258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.905357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.905387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.905515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.905544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 
00:28:22.509 [2024-12-06 19:26:32.905684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.905718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.905932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.905963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.906065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.906096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.906252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.906281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.906415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.906445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 
00:28:22.509 [2024-12-06 19:26:32.906599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.906629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.906764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.906794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.906930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.906961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.907093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.907122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.907236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.907266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 
00:28:22.509 [2024-12-06 19:26:32.907372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.907403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.907513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.907557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.907673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.907705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.907839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.907869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.908001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.908030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 
00:28:22.509 [2024-12-06 19:26:32.908154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.908183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.908310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.908340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.908431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.908460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.908554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.908583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 00:28:22.509 [2024-12-06 19:26:32.908701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.509 [2024-12-06 19:26:32.908730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.509 qpair failed and we were unable to recover it. 
00:28:22.510 [2024-12-06 19:26:32.908843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.908878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 00:28:22.510 [2024-12-06 19:26:32.909019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.909052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 00:28:22.510 [2024-12-06 19:26:32.909198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.909229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 00:28:22.510 [2024-12-06 19:26:32.909330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.909359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 00:28:22.510 [2024-12-06 19:26:32.909452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.909483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 
00:28:22.510 [2024-12-06 19:26:32.909644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.909681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 00:28:22.510 [2024-12-06 19:26:32.909839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.909868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 00:28:22.510 [2024-12-06 19:26:32.909959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.909990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 00:28:22.510 [2024-12-06 19:26:32.910094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.910125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 00:28:22.510 [2024-12-06 19:26:32.910234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.510 [2024-12-06 19:26:32.910265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.510 qpair failed and we were unable to recover it. 
00:28:22.513 [2024-12-06 19:26:32.927035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.927065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.927224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.927254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.927393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.927422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.927544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.927573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.927685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.927716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 
00:28:22.513 [2024-12-06 19:26:32.927822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.927853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.927975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.928005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.928138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.928169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.928282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.928313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.928468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.928498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 
00:28:22.513 [2024-12-06 19:26:32.928648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.928688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.928823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.928853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.928983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.929012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.929139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.929167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.929292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.929328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 
00:28:22.513 [2024-12-06 19:26:32.929428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.929458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.929559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.929590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.929750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.929780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.929911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.929941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.930039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.930069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 
00:28:22.513 [2024-12-06 19:26:32.930176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.930206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.930310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.513 [2024-12-06 19:26:32.930339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.513 qpair failed and we were unable to recover it. 00:28:22.513 [2024-12-06 19:26:32.930464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.930494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.930651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.930689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.930784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.930815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 
00:28:22.514 [2024-12-06 19:26:32.930947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.930978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.931108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.931138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.931265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.931296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.931432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.931462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.931564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.931594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 
00:28:22.514 [2024-12-06 19:26:32.931738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.931782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.931885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.931916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.932033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.932065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.932164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.932194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.932301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.932330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 
00:28:22.514 [2024-12-06 19:26:32.932423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.932452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.932536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.932565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.932655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.932691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.932789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.932819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.932911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.932941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 
00:28:22.514 [2024-12-06 19:26:32.933040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.933069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.933172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.933207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.933305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.933334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.933457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.933486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.933581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.933610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 
00:28:22.514 [2024-12-06 19:26:32.933721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.933755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.933885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.933914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.934044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.934074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.934199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.934229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.934363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.934393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 
00:28:22.514 [2024-12-06 19:26:32.934485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.934516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.934623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.934654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.934759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.934789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.934883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.934914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.935029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.935057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 
00:28:22.514 [2024-12-06 19:26:32.935191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.935237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.935343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.935386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.935490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.935518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.935612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.935640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 00:28:22.514 [2024-12-06 19:26:32.935739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.514 [2024-12-06 19:26:32.935771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.514 qpair failed and we were unable to recover it. 
00:28:22.514 [2024-12-06 19:26:32.935890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.935918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.936018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.936047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.936146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.936174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.936266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.936294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.936391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.936419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 
00:28:22.515 [2024-12-06 19:26:32.936514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.936542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.936654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.936688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.936786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.936814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.936912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.936945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.937038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.937066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 
00:28:22.515 [2024-12-06 19:26:32.937201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.937229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.937390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.937418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.937505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.937534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.937627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.937655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 00:28:22.515 [2024-12-06 19:26:32.937783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.515 [2024-12-06 19:26:32.937812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.515 qpair failed and we were unable to recover it. 
00:28:22.515 [2024-12-06 19:26:32.937931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.515 [2024-12-06 19:26:32.937959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:22.515 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 19:26:32.938063 through 19:26:32.955005 for tqpair=0x6bcfa0, 0x7f82c8000b90, and 0x7f82d4000b90, all with addr=10.0.0.2, port=4420 ...]
00:28:22.518 [2024-12-06 19:26:32.955098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.955124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.955217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.955246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.955361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.955386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.955476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.955503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.955618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.955644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 
00:28:22.518 [2024-12-06 19:26:32.955769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.955797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.955910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.955935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.956078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.956103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.956188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.956213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.956326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.956352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 
00:28:22.518 [2024-12-06 19:26:32.956438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.956464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.956576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.956602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.956692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.956723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.956814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.956839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.956947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.956973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 
00:28:22.518 [2024-12-06 19:26:32.957084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.957110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.957199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.518 [2024-12-06 19:26:32.957225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.518 qpair failed and we were unable to recover it. 00:28:22.518 [2024-12-06 19:26:32.957332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.957357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.957443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.957469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.957567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.957593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 
00:28:22.519 [2024-12-06 19:26:32.957705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.957731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.957838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.957885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.958021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.958046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.958129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.958155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.958237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.958262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 
00:28:22.519 [2024-12-06 19:26:32.958371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.958397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.958490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.958515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.958595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.958621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.958762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.958788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.958874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.958899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 
00:28:22.519 [2024-12-06 19:26:32.958979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.959004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.959141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.959167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.959251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.959277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.959365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.959391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.959519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.959544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 
00:28:22.519 [2024-12-06 19:26:32.959634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.959662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.959777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.959803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.959926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.959952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.960037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.960062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.960192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.960218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 
00:28:22.519 [2024-12-06 19:26:32.960316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.960346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.960435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.960460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.960573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.960600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.960750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.960776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.960866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.960895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 
00:28:22.519 [2024-12-06 19:26:32.961008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.961033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.961126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.961153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.961263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.961289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.961400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.961427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.519 [2024-12-06 19:26:32.961511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.961537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 
00:28:22.519 [2024-12-06 19:26:32.961623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.519 [2024-12-06 19:26:32.961649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.519 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.961779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.961805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.961899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.961926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.962019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.962044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.962122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.962148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 
00:28:22.805 [2024-12-06 19:26:32.962224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.962250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.962335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.962361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.962473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.962499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.962614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.962639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.962746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.962771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 
00:28:22.805 [2024-12-06 19:26:32.962865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.962890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.962995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.963022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.963103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.963129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.963244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.963269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.963363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.963389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 
00:28:22.805 [2024-12-06 19:26:32.963479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.963505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.963599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.963624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.963717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.963743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.963858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.805 [2024-12-06 19:26:32.963883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.805 qpair failed and we were unable to recover it. 00:28:22.805 [2024-12-06 19:26:32.963968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.806 [2024-12-06 19:26:32.963993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.806 qpair failed and we were unable to recover it. 
00:28:22.806 [2024-12-06 19:26:32.964085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.806 [2024-12-06 19:26:32.964111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.806 qpair failed and we were unable to recover it. 00:28:22.806 [2024-12-06 19:26:32.964228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.806 [2024-12-06 19:26:32.964256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.806 qpair failed and we were unable to recover it. 00:28:22.806 [2024-12-06 19:26:32.964341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.806 [2024-12-06 19:26:32.964368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.806 qpair failed and we were unable to recover it. 00:28:22.806 [2024-12-06 19:26:32.964456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.806 [2024-12-06 19:26:32.964482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.806 qpair failed and we were unable to recover it. 00:28:22.806 [2024-12-06 19:26:32.964566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.806 [2024-12-06 19:26:32.964591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.806 qpair failed and we were unable to recover it. 
00:28:22.806 [2024-12-06 19:26:32.964706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.964732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.964821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.964846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.964934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.964960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.965043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.965068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.965154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.965185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.965268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.965295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.965439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.965464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.965560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.965585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.965676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.965703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.965784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.965810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.965927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.965952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.966082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.966108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.966223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.966248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.966330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.966355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.966440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.966465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.966554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.966580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.966673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.966700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.966839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.966867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.966961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.966988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.967106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.967132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.967216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.967243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.967331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.967357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.967445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.967473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.967621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.967648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.967749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.967776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.967893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.967920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.968005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.968031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.968121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.968147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.968262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.968289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.968368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.806 [2024-12-06 19:26:32.968395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.806 qpair failed and we were unable to recover it.
00:28:22.806 [2024-12-06 19:26:32.968513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.968539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.968656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.968692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.968809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.968835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.968950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.968976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.969063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.969090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.969169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.969196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.969290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.969317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.969405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.969431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.969525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.969554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.969637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.969671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.969793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.969820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.969933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.969959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.970039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.970065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.970188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.970214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.970302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.970333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.970415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.970442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.970541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.970569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.970686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.970714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.970811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.970838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.970933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.970960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.971070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.971096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.971239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.971264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.971356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.971384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.971501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.971528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.971622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.971648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.971774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.971800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.971916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.971944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.972057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.972084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.972209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.972236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.972360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.972390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.972477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.972503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.972625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.972652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.972769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.972796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.972877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.972903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.973044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.973071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.807 [2024-12-06 19:26:32.973186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.807 [2024-12-06 19:26:32.973212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.807 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.973295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.973321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.973438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.973464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.973552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.973579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.973691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.973718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.973840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.973866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.973958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.973984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.974067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.974093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.974184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.974210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.974324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.974351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.974468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.974494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.974609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.974635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.974754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.974783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.974878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.974904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.975019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.975046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.975137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.975164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.975282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.975309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.975436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.975462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.975606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.975633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.975730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.975762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.975876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.975903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.976022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.976048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.976165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.976191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.976308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.976334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.976433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.976459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.976543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.976569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.976688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.976716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.976831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.976857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.977000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.977026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.977136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.977162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.977289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.977315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.977427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.977454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.977581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.977608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.977759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.977788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.977878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.977905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.978026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.978053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.978166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.978193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.978332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.808 [2024-12-06 19:26:32.978358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.808 qpair failed and we were unable to recover it.
00:28:22.808 [2024-12-06 19:26:32.978449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.978476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.978584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.978610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.978700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.978727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.978822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.978849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.978937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.978964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.979047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.979074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.979169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.979196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.979309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.979336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.979457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.979484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.979616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.979642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.979742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.979771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.979907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.809 [2024-12-06 19:26:32.979933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.809 qpair failed and we were unable to recover it.
00:28:22.809 [2024-12-06 19:26:32.980027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.980053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.980130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.980157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.980280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.980307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.980425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.980451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.980571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.980598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 
00:28:22.809 [2024-12-06 19:26:32.980713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.980740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.980828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.980855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.980968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.980994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.981116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.981141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.981234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.981265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 
00:28:22.809 [2024-12-06 19:26:32.981409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.981438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.981546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.981572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.981688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.981716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.981794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.981820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.981907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.981933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 
00:28:22.809 [2024-12-06 19:26:32.982047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.982073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.982213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.982239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.982333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.982359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.982443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.982471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.982603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.982630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 
00:28:22.809 [2024-12-06 19:26:32.982772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.982801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.982945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.982973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.983109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.983183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.983329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.983354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.809 qpair failed and we were unable to recover it. 00:28:22.809 [2024-12-06 19:26:32.983471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.809 [2024-12-06 19:26:32.983496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 
00:28:22.810 [2024-12-06 19:26:32.983617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.983642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.983732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.983758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.983841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.983867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.983969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.983995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.984075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.984101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 
00:28:22.810 [2024-12-06 19:26:32.984183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.984209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.984318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.984343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.984428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.984455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.984538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.984564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.984682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.984708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 
00:28:22.810 [2024-12-06 19:26:32.984848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.984874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.984994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.985019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.985132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.985157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.985267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.985292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.985374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.985399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 
00:28:22.810 [2024-12-06 19:26:32.985540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.985566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.985684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.985744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.985866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.985892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.986009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.986035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.986118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.986144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 
00:28:22.810 [2024-12-06 19:26:32.986228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.986253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.986362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.986388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.986498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.986523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.986659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.986692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.986773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.986803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 
00:28:22.810 [2024-12-06 19:26:32.986920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.986946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.987057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.987083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.987238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.987264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.987353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.987379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.810 [2024-12-06 19:26:32.987492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.987519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 
00:28:22.810 [2024-12-06 19:26:32.987660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.810 [2024-12-06 19:26:32.987692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.810 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.987880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.987966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.988138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.988163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.988251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.988279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.988419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.988444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 
00:28:22.811 [2024-12-06 19:26:32.988566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.988592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.988710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.988772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.989018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.989093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.989349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.989401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.989511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.989537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 
00:28:22.811 [2024-12-06 19:26:32.989615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.989641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.989831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.989901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.990133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.990207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.990374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.990400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.990479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.990505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 
00:28:22.811 [2024-12-06 19:26:32.990614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.990640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.990743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.990805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.990916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.990981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.991119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.991184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 00:28:22.811 [2024-12-06 19:26:32.991351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.811 [2024-12-06 19:26:32.991376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.811 qpair failed and we were unable to recover it. 
00:28:22.811 [2024-12-06 19:26:32.991493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.811 [2024-12-06 19:26:32.991518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.811 qpair failed and we were unable to recover it.
00:28:22.814 [preceding connect()/qpair-failure triplet repeated ~115 times between 19:26:32.991 and 19:26:33.008, all with errno = 111 for tqpair=0x7f82c8000b90, addr=10.0.0.2, port=4420]
00:28:22.814 [2024-12-06 19:26:33.008773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.814 [2024-12-06 19:26:33.008800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.814 qpair failed and we were unable to recover it. 00:28:22.814 [2024-12-06 19:26:33.008945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.814 [2024-12-06 19:26:33.008972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.814 qpair failed and we were unable to recover it. 00:28:22.814 [2024-12-06 19:26:33.009108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.814 [2024-12-06 19:26:33.009135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.814 qpair failed and we were unable to recover it. 00:28:22.814 [2024-12-06 19:26:33.009283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.814 [2024-12-06 19:26:33.009311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.814 qpair failed and we were unable to recover it. 00:28:22.814 [2024-12-06 19:26:33.009428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.814 [2024-12-06 19:26:33.009456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.814 qpair failed and we were unable to recover it. 
00:28:22.814 [2024-12-06 19:26:33.009543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.814 [2024-12-06 19:26:33.009571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.814 qpair failed and we were unable to recover it. 00:28:22.814 [2024-12-06 19:26:33.009699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.814 [2024-12-06 19:26:33.009729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.814 qpair failed and we were unable to recover it. 00:28:22.814 [2024-12-06 19:26:33.009888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.814 [2024-12-06 19:26:33.009915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.814 qpair failed and we were unable to recover it. 00:28:22.814 [2024-12-06 19:26:33.010064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.814 [2024-12-06 19:26:33.010090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.814 qpair failed and we were unable to recover it. 00:28:22.814 [2024-12-06 19:26:33.010217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.010246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 
00:28:22.815 [2024-12-06 19:26:33.010369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.010397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.010519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.010547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.010631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.010659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.010817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.010846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.010924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.010951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 
00:28:22.815 [2024-12-06 19:26:33.011100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.011128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.011244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.011272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.011368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.011396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.011521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.011549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.011645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.011684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 
00:28:22.815 [2024-12-06 19:26:33.011805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.011833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.011930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.011958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.012116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.012144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.012235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.012262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.012359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.012386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 
00:28:22.815 [2024-12-06 19:26:33.012535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.012563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.012658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.012693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.012817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.012845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.012940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.012968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.013088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.013117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 
00:28:22.815 [2024-12-06 19:26:33.013267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.013295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.013445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.013473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.013625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.013693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.013792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.013820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.013946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.013974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 
00:28:22.815 [2024-12-06 19:26:33.014064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.014092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.014180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.014208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.014327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.014355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.014476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.014504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.014601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.014630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 
00:28:22.815 [2024-12-06 19:26:33.014768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.014797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.014916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.014944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.015066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.015095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.015240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.015268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.815 [2024-12-06 19:26:33.015404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.015447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 
00:28:22.815 [2024-12-06 19:26:33.015600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.815 [2024-12-06 19:26:33.015629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.815 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.015762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.015791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.015958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.016001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.016126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.016180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.016346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.016396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 
00:28:22.816 [2024-12-06 19:26:33.016538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.016566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.016684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.016713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.016804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.016832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.016915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.016944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.017038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.017066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 
00:28:22.816 [2024-12-06 19:26:33.017212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.017240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.017362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.017390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.017505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.017533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.017657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.017692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.017816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.017849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 
00:28:22.816 [2024-12-06 19:26:33.017971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.018000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.018149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.018176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.018294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.018323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.018451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.018479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.018562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.018590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 
00:28:22.816 [2024-12-06 19:26:33.018709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.018738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.018891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.018919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.019053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.019081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.019227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.019255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.019384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.019412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 
00:28:22.816 [2024-12-06 19:26:33.019566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.019594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.019711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.019740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.019844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.019872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.020002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.020030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 00:28:22.816 [2024-12-06 19:26:33.020148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.816 [2024-12-06 19:26:33.020177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.816 qpair failed and we were unable to recover it. 
00:28:22.816 [2024-12-06 19:26:33.020301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.816 [2024-12-06 19:26:33.020329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.816 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats for every retry from 19:26:33.020476 through 19:26:33.038274 (~115 occurrences); every attempt targets addr=10.0.0.2, port=4420 on tqpair=0x7f82c8000b90 and fails with errno = 111 ...]
00:28:22.820 [2024-12-06 19:26:33.038366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.038395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.038545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.038574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.038701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.038730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.038855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.038883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.038982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.039012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 
00:28:22.820 [2024-12-06 19:26:33.039125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.039154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.039282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.039311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.039432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.039460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.039582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.039612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.039737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.039766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 
00:28:22.820 [2024-12-06 19:26:33.039860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.039888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.040013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.040041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.040163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.040192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.040305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.040333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.040487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.040516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 
00:28:22.820 [2024-12-06 19:26:33.040697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.040728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.040855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.040889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.040995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.041023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.041144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.041173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.041327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.041356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 
00:28:22.820 [2024-12-06 19:26:33.041486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.041517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.041627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.041655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.041820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.041848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.042013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.042042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.042193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.042221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 
00:28:22.820 [2024-12-06 19:26:33.042347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.042376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.042503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.042532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.042685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.042714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.042876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.042905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.043049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.043079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 
00:28:22.820 [2024-12-06 19:26:33.043209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.043239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.043335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.043364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.043495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.043526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.043674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.043705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.043833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.043863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 
00:28:22.820 [2024-12-06 19:26:33.043998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.044029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.044191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.820 [2024-12-06 19:26:33.044220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.820 qpair failed and we were unable to recover it. 00:28:22.820 [2024-12-06 19:26:33.044308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.044338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.044467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.044497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.044620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.044650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 
00:28:22.821 [2024-12-06 19:26:33.044817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.044848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.044980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.045009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.045164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.045193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.045326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.045356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.045462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.045491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 
00:28:22.821 [2024-12-06 19:26:33.045578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.045608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.045736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.045765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.045889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.045919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.046049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.046080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.046234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.046263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 
00:28:22.821 [2024-12-06 19:26:33.046420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.046450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.046603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.046635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.046750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.046780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.046917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.046951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.047082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.047112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 
00:28:22.821 [2024-12-06 19:26:33.047236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.047265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.047390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.047426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.047578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.047608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.047728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.047758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.047855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.047885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 
00:28:22.821 [2024-12-06 19:26:33.048017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.048047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.048171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.048201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.048337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.048367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.048490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.048519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.048652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.048709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 
00:28:22.821 [2024-12-06 19:26:33.048837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.048867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.049000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.049030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.049126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.049157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.049310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.049339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.049461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.049492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 
00:28:22.821 [2024-12-06 19:26:33.049623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.049653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.049761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.049791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.049908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.049938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.050038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.050068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 00:28:22.821 [2024-12-06 19:26:33.050196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.821 [2024-12-06 19:26:33.050226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.821 qpair failed and we were unable to recover it. 
00:28:22.821 [2024-12-06 19:26:33.050380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.050410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.050533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.050562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.050719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.050750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.050902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.050931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.051056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.051086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 
00:28:22.822 [2024-12-06 19:26:33.051224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.051255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.051385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.051414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.051543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.051573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.051687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.051717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.051847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.051877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 
00:28:22.822 [2024-12-06 19:26:33.052033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.052065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.052198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.052234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.052392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.052422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.052549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.052579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.052707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.052738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 
00:28:22.822 [2024-12-06 19:26:33.052867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.052896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.052993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.053022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.053180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.053210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.053365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.053394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.053521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.053552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 
00:28:22.822 [2024-12-06 19:26:33.053711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.053742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.053867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.053905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.054061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.054092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.054252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.054283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.054378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.054408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 
00:28:22.822 [2024-12-06 19:26:33.054508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.054538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.054678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.054709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.054834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.054864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.054971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.054999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.055136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.055167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 
00:28:22.822 [2024-12-06 19:26:33.055330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.055361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.055512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.055542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.055680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.055711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.055865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.055896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.056051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.056081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 
00:28:22.822 [2024-12-06 19:26:33.056228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.056259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.056412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.056441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.822 [2024-12-06 19:26:33.056564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.822 [2024-12-06 19:26:33.056594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.822 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.056748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.056779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.056928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.056957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 
00:28:22.823 [2024-12-06 19:26:33.057104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.057134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.057235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.057280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.057440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.057470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.057599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.057629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.057764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.057795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 
00:28:22.823 [2024-12-06 19:26:33.057923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.057953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.058048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.058078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.058208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.058238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.058374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.058404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.058492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.058522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 
00:28:22.823 [2024-12-06 19:26:33.058681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.058713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.058869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.058900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.059032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.059061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.059159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.059191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.059339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.059368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 
00:28:22.823 [2024-12-06 19:26:33.059457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.059487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.059578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.059607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.059752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.059798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.059930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.059961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.060091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.060121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 
00:28:22.823 [2024-12-06 19:26:33.060246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.060275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.060398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.060436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.060609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.060647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.060810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.060841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.060935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.060965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 
00:28:22.823 [2024-12-06 19:26:33.061096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.061126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.061246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.061277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.061399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.061428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.061554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.061584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.061712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.061743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 
00:28:22.823 [2024-12-06 19:26:33.061901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.061931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.062033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.062062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.062189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.062218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.823 qpair failed and we were unable to recover it. 00:28:22.823 [2024-12-06 19:26:33.062352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.823 [2024-12-06 19:26:33.062381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.062542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.062571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 
00:28:22.824 [2024-12-06 19:26:33.062729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.062759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.062854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.062883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.062981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.063019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.063202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.063232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.063334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.063364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 
00:28:22.824 [2024-12-06 19:26:33.063502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.063532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.063674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.063706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.063837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.063868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.064002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.064032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.064130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.064161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 
00:28:22.824 [2024-12-06 19:26:33.064281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.064311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.064436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.064467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.064601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.064632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.064781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.064816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.064993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.065023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 
00:28:22.824 [2024-12-06 19:26:33.065129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.065159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.065290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.065319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.065407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.065437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.065563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.065594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.065722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.065752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 
00:28:22.824 [2024-12-06 19:26:33.065881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.065911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.066050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.066080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.066211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.066250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.066365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.066396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.066486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.066516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 
00:28:22.824 [2024-12-06 19:26:33.066609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.066639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.066744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.066776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.066927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.066978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.824 [2024-12-06 19:26:33.067138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.824 [2024-12-06 19:26:33.067172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.824 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.067305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.067337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 
00:28:22.825 [2024-12-06 19:26:33.067485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.067517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.067682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.067715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.067848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.067880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.067976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.068007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.068192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.068228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 
00:28:22.825 [2024-12-06 19:26:33.068353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.068386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.068519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.068556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.068732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.068763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.068881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.068916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.069026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.069060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 
00:28:22.825 [2024-12-06 19:26:33.069184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.069226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.069368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.069403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.069510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.069544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.069652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.069700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.069832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.069867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 
00:28:22.825 [2024-12-06 19:26:33.070006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.070036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.070173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.070203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.070329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.070360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.070453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.070483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.070648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.070695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 
00:28:22.825 [2024-12-06 19:26:33.070862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.070897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.071057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.071090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.071271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.071305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.071452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.071487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.071651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.071706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 
00:28:22.825 [2024-12-06 19:26:33.071859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.071896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.072080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.072116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.072267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.072302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.072431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.072468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.072615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.072649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 
00:28:22.825 [2024-12-06 19:26:33.072805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.072841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.072992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.073027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.073164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.073199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.073309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.073345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 00:28:22.825 [2024-12-06 19:26:33.073492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.825 [2024-12-06 19:26:33.073527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.825 qpair failed and we were unable to recover it. 
00:28:22.826 [2024-12-06 19:26:33.073701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.073736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.073881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.073917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.074089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.074131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.074241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.074276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.074450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.074484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 
00:28:22.826 [2024-12-06 19:26:33.074602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.074637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.074782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.074834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.074958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.074996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.075143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.075180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.075322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.075356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 
00:28:22.826 [2024-12-06 19:26:33.075507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.075542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.075691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.075728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.075911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.075946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.076052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.076087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.076254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.076290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 
00:28:22.826 [2024-12-06 19:26:33.076425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.076460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.076611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.076646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.076803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.076839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.076951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.076987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.077141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.077176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 
00:28:22.826 [2024-12-06 19:26:33.077361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.077397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.077506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.077545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.077699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.077737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.077891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.077927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.078069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.078105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 
00:28:22.826 [2024-12-06 19:26:33.078251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.078288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.078444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.078479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.078618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.078653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.078808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.078845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.079003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.079040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 
00:28:22.826 [2024-12-06 19:26:33.079156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.079190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.079335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.079371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.079523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.079559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.079727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.079764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.826 [2024-12-06 19:26:33.079916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.079951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 
00:28:22.826 [2024-12-06 19:26:33.080115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.826 [2024-12-06 19:26:33.080150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.826 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.080329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.080363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.080478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.080515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.080703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.080739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.080884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.080921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 
00:28:22.827 [2024-12-06 19:26:33.081111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.081147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.081334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.081371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.081495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.081536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.081653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.081700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.081852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.081891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 
00:28:22.827 [2024-12-06 19:26:33.082009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.082050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.082199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.082233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.082405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.082440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.082580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.082615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.082746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.082784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 
00:28:22.827 [2024-12-06 19:26:33.082897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.082932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.083117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.083153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.083312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.083350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.083465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.083501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.083653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.083710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 
00:28:22.827 [2024-12-06 19:26:33.083828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.083864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.084021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.084058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.084172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.084208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.084312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.084347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 00:28:22.827 [2024-12-06 19:26:33.084511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.827 [2024-12-06 19:26:33.084547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.827 qpair failed and we were unable to recover it. 
00:28:22.827 [2024-12-06 19:26:33.084698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.084734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.084879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.084916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.085074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.085110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.085292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.085328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.085506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.085542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.085688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.085725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.085900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.085936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.086090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.086125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.086277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.086313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.086462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.086501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.086616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.827 [2024-12-06 19:26:33.086651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.827 qpair failed and we were unable to recover it.
00:28:22.827 [2024-12-06 19:26:33.086817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.086852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.087034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.087069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.087244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.087280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.087456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.087490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.087594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.087629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.087832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.087866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.088024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.088060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.088214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.088249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.088361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.088396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.088576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.088610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.088740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.088775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.088916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.088958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.089104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.089138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.089255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.089290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.089439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.089474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.089613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.089647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.089800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.089835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.089954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.089989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.090172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.090207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.090350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.090394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.090558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.090592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.090742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.090777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.090924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.090959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.091074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.091109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.091260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.091295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.091406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.091440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.091615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.091649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.091838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.091873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.091987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.092021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.092171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.092207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.092387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.092422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.092544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.092598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.092797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.092837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.092966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.093004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.093161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.093199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.828 [2024-12-06 19:26:33.093350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.828 [2024-12-06 19:26:33.093386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.828 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.093536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.093573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.093759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.093796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.093944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.093981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.094132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.094174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.094356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.094392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.094541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.094577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.094731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.094768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.094887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.094924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.095104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.095141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.095256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.095292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.095406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.095441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.095618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.095654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.095844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.095880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.096029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.096064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.096180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.096217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.096368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.096405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.096572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.096613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.096798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.096834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.096998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.097035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.097221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.097264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.097406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.097441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.097583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.097619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.097834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.097895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.098119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.098162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.098335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.098371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.098511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.098546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.098699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.098736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.098916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.098952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.099113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.099148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.099297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.099333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.829 [2024-12-06 19:26:33.099453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.829 [2024-12-06 19:26:33.099489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.829 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.099637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.099689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.099845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.099880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.100062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.100098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.100254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.100290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.100470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.100505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.100654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.100699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.100879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.100915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.101074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.101111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.101236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.101272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.101453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.101489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.101676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.101712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.101895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.101937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.102129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.102164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.102282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.102317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.102494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.102531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.102644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.102692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.102876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.830 [2024-12-06 19:26:33.102913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:22.830 qpair failed and we were unable to recover it.
00:28:22.830 [2024-12-06 19:26:33.103108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.103144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.103259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.103295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.103481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.103518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.103701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.103740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.103856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.103894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 
00:28:22.830 [2024-12-06 19:26:33.104098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.104135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.104291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.104328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.104510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.104548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.104708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.104746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.104899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.104936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 
00:28:22.830 [2024-12-06 19:26:33.105085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.105122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.105314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.105351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.105501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.105539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.105678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.105716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.105858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.105895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 
00:28:22.830 [2024-12-06 19:26:33.106046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.106084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.106271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.106308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.106458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.830 [2024-12-06 19:26:33.106495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.830 qpair failed and we were unable to recover it. 00:28:22.830 [2024-12-06 19:26:33.106627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.106685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.106808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.106846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 
00:28:22.831 [2024-12-06 19:26:33.106998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.107034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.107224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.107261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.107370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.107407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.107595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.107632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.107774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.107813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 
00:28:22.831 [2024-12-06 19:26:33.108011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.108048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.108168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.108206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.108390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.108428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.108575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.108612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.108820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.108858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 
00:28:22.831 [2024-12-06 19:26:33.109035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.109075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.109225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.109268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.109423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.109460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.109604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.109642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.109800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.109844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 
00:28:22.831 [2024-12-06 19:26:33.110001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.110048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.110238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.110275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.110426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.110463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.110578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.110617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.110787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.110825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 
00:28:22.831 [2024-12-06 19:26:33.111014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.111052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.111200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.111237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.111385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.111422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.111560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.111597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.111778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.111816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 
00:28:22.831 [2024-12-06 19:26:33.111965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.112001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.112178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.112216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.112362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.112399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.112560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.112597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.831 [2024-12-06 19:26:33.112723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.112760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 
00:28:22.831 [2024-12-06 19:26:33.112942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.831 [2024-12-06 19:26:33.112979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.831 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.113114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.113151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.113298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.113335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.113487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.113524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.113681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.113720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 
00:28:22.832 [2024-12-06 19:26:33.113872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.113909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.114064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.114103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.114265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.114302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.114448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.114486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.114679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.114718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 
00:28:22.832 [2024-12-06 19:26:33.114873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.114910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.115098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.115135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.115302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.115339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.115535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.115572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.115726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.115764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 
00:28:22.832 [2024-12-06 19:26:33.115910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.115947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.116142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.116183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.116334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.116371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.116519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.116557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.116747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.116785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 
00:28:22.832 [2024-12-06 19:26:33.116903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.116941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.117095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.117133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.117284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.117322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.117471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.117509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.117708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.117752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 
00:28:22.832 [2024-12-06 19:26:33.117914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.117952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.118109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.118147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.118263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.118301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.118506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.118544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.118730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.118768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 
00:28:22.832 [2024-12-06 19:26:33.118911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.118948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.119107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.119153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.119266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.119304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.119445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.119482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 00:28:22.832 [2024-12-06 19:26:33.119674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.119713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.832 qpair failed and we were unable to recover it. 
00:28:22.832 [2024-12-06 19:26:33.119895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.832 [2024-12-06 19:26:33.119932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.833 qpair failed and we were unable to recover it. 00:28:22.833 [2024-12-06 19:26:33.120052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.833 [2024-12-06 19:26:33.120089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.833 qpair failed and we were unable to recover it. 00:28:22.833 [2024-12-06 19:26:33.120207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.833 [2024-12-06 19:26:33.120245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.833 qpair failed and we were unable to recover it. 00:28:22.833 [2024-12-06 19:26:33.120359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.833 [2024-12-06 19:26:33.120397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.833 qpair failed and we were unable to recover it. 00:28:22.833 [2024-12-06 19:26:33.120515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.833 [2024-12-06 19:26:33.120552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.833 qpair failed and we were unable to recover it. 
00:28:22.836 [2024-12-06 19:26:33.142185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.142225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.142386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.142425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.142582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.142621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.142790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.142829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.143002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.143042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 
00:28:22.836 [2024-12-06 19:26:33.143198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.143237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.143423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.143462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.143585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.143624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.143845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.143903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.144048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.144091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 
00:28:22.836 [2024-12-06 19:26:33.144265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.144305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.144462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.144501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.144626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.144676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.144874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.144912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.145071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.145110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 
00:28:22.836 [2024-12-06 19:26:33.145261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.145301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.145431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.145472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.145624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.145674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.145805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.145843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.145972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.146012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 
00:28:22.836 [2024-12-06 19:26:33.146201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.836 [2024-12-06 19:26:33.146239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.836 qpair failed and we were unable to recover it. 00:28:22.836 [2024-12-06 19:26:33.146402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.146441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.146634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.146683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.146815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.146853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.146964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.147002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 
00:28:22.837 [2024-12-06 19:26:33.147155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.147193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.147343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.147401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.147564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.147605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.147778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.147820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.147969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.148009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 
00:28:22.837 [2024-12-06 19:26:33.148167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.148207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.148379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.148418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.148571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.148611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.148792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.148833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.148970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.149018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 
00:28:22.837 [2024-12-06 19:26:33.149175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.149216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.149349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.149389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.149549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.149588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.149795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.149836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.150040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.150078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 
00:28:22.837 [2024-12-06 19:26:33.150210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.150248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.150374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.150412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.150607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.150645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.150823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.150861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.151015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.151053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 
00:28:22.837 [2024-12-06 19:26:33.151176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.151216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.151384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.151421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.151576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.151618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.151799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.151840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.151980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.152020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 
00:28:22.837 [2024-12-06 19:26:33.152138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.152178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.152364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.152403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.152565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.152606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.152764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.152804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.152954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.152994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 
00:28:22.837 [2024-12-06 19:26:33.153115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.153155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.153354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.153392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.153517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.153557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.837 qpair failed and we were unable to recover it. 00:28:22.837 [2024-12-06 19:26:33.153713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.837 [2024-12-06 19:26:33.153753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.153912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.153951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 
00:28:22.838 [2024-12-06 19:26:33.154074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.154113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.154275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.154315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.154438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.154479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.154635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.154684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.154851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.154891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 
00:28:22.838 [2024-12-06 19:26:33.155058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.155097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.155254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.155293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.155421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.155461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.155656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.155705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.155896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.155935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 
00:28:22.838 [2024-12-06 19:26:33.156097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.156136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.156324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.156364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.156509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.156548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.156713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.156754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.156909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.156955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 
00:28:22.838 [2024-12-06 19:26:33.157117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.157156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.157321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.157362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.157487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.157528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.157699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.157736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 00:28:22.838 [2024-12-06 19:26:33.157902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.838 [2024-12-06 19:26:33.157942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.838 qpair failed and we were unable to recover it. 
00:28:22.841 [2024-12-06 19:26:33.180352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.180397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.180585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.180633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.180830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.180877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.181091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.181136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.181298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.181350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 
00:28:22.841 [2024-12-06 19:26:33.181528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.181572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.181738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.181789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.182013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.182059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.182289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.182334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.182518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.182569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 
00:28:22.841 [2024-12-06 19:26:33.182806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.182852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.183000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.183047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.183225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.183270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.183422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.841 [2024-12-06 19:26:33.183466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.841 qpair failed and we were unable to recover it. 00:28:22.841 [2024-12-06 19:26:33.183643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.183698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 
00:28:22.842 [2024-12-06 19:26:33.183878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.183923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.184075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.184120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.184289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.184334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.184513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.184558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.184707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.184752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 
00:28:22.842 [2024-12-06 19:26:33.184966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.185011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.185170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.185219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.185404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.185471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.185674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.185720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.185897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.185941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 
00:28:22.842 [2024-12-06 19:26:33.186076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.186121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.186288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.186338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.186534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.186579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.186764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.186810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.186963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.187007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 
00:28:22.842 [2024-12-06 19:26:33.187187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.187232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.187453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.187503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.187689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.187735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.187891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.187937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.188124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.188168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 
00:28:22.842 [2024-12-06 19:26:33.188358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.188402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.188580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.188624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.188799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.188848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.188997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.189045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.189200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.189249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 
00:28:22.842 [2024-12-06 19:26:33.189386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.189430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.189608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.189653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.189847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.189896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.190051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.190096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.190276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.190331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 
00:28:22.842 [2024-12-06 19:26:33.190475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.190521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.190720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.190766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.190895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.190940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.191128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.191173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.191296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.191344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 
00:28:22.842 [2024-12-06 19:26:33.191530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.191575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.191752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.842 [2024-12-06 19:26:33.191798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.842 qpair failed and we were unable to recover it. 00:28:22.842 [2024-12-06 19:26:33.191930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.191975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.192156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.192205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.192371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.192419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 
00:28:22.843 [2024-12-06 19:26:33.192597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.192644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.192802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.192846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.193023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.193068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.193254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.193300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.193510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.193564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 
00:28:22.843 [2024-12-06 19:26:33.193769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.193821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.194020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.194072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.194326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.194377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.194580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.194625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.194873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.194930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 
00:28:22.843 [2024-12-06 19:26:33.195111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.195161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.195381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.195431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.195677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.195744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.195968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.196014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.196236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.196283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 
00:28:22.843 [2024-12-06 19:26:33.196437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.196481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.196637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.196694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.196883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.196933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.197111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.197155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.197332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.197378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 
00:28:22.843 [2024-12-06 19:26:33.197592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.197636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.197825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.197869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.198070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.198116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.198292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.198337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 00:28:22.843 [2024-12-06 19:26:33.198536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.843 [2024-12-06 19:26:33.198583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.843 qpair failed and we were unable to recover it. 
00:28:22.846 [2024-12-06 19:26:33.225489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.225537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 00:28:22.846 [2024-12-06 19:26:33.225704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.225775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 00:28:22.846 [2024-12-06 19:26:33.225957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.226020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 00:28:22.846 [2024-12-06 19:26:33.226187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.226249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 00:28:22.846 [2024-12-06 19:26:33.226463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.226515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 
00:28:22.846 [2024-12-06 19:26:33.226694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.226758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 00:28:22.846 [2024-12-06 19:26:33.226994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.227040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 00:28:22.846 [2024-12-06 19:26:33.227219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.227262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 00:28:22.846 [2024-12-06 19:26:33.227416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.227460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 00:28:22.846 [2024-12-06 19:26:33.227608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.846 [2024-12-06 19:26:33.227683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.846 qpair failed and we were unable to recover it. 
00:28:22.846 [2024-12-06 19:26:33.227908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.227953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.228138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.228180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.228353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.228417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.228619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.228676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.228907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.228950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 
00:28:22.847 [2024-12-06 19:26:33.229131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.229175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.229377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.229419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.229601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.229644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.229816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.229880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.230032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.230100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 
00:28:22.847 [2024-12-06 19:26:33.230264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.230311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.230513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.230555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.230761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.230804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.231028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.231080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.231268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.231331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 
00:28:22.847 [2024-12-06 19:26:33.231556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.231604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.231812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.231875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.232080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.232140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.232409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.232470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.232680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.232724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 
00:28:22.847 [2024-12-06 19:26:33.232855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.232898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.233120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.233162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.233378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.233421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.233625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.233685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.233853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.233921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 
00:28:22.847 [2024-12-06 19:26:33.234116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.234161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.234353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.234398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.234549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.234594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.234771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.234822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.234954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.235004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 
00:28:22.847 [2024-12-06 19:26:33.235229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.235270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.235452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.235502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.235714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.235762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.235908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.235956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.236135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.236178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 
00:28:22.847 [2024-12-06 19:26:33.236382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.236425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.236601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.236643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.236861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.236905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.237108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.237152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 00:28:22.847 [2024-12-06 19:26:33.237312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.847 [2024-12-06 19:26:33.237354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.847 qpair failed and we were unable to recover it. 
00:28:22.847 [2024-12-06 19:26:33.237495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.237537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.237708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.237751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.237878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.237920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.238107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.238153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.238291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.238339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 
00:28:22.848 [2024-12-06 19:26:33.238513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.238555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.238721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.238763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.238932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.238976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.239152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.239197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.239383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.239426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 
00:28:22.848 [2024-12-06 19:26:33.239559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.239600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.239771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.239814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.239965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.240007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.240192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.240239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.240377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.240421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 
00:28:22.848 [2024-12-06 19:26:33.240598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.240641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.240794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.240837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.240958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.241001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.241225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.241269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.241473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.241515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 
00:28:22.848 [2024-12-06 19:26:33.241690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.241733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.241909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.241953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.242123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.242166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.242369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.242411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.242622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.242674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 
00:28:22.848 [2024-12-06 19:26:33.242828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.242871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.243047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.243089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.243278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.243321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.243451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.243493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.243653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.243706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 
00:28:22.848 [2024-12-06 19:26:33.243848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.243898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.244046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.244099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.244270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.244315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.244489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.244531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.244705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.244748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 
00:28:22.848 [2024-12-06 19:26:33.244885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.244932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.245123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.245166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.245337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.245379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.245554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.848 [2024-12-06 19:26:33.245596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.848 qpair failed and we were unable to recover it. 00:28:22.848 [2024-12-06 19:26:33.245774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.245820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 
00:28:22.849 [2024-12-06 19:26:33.245994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.246043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.246205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.246250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.246406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.246451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.246679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.246725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.246904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.246965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 
00:28:22.849 [2024-12-06 19:26:33.247184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.247232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.247415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.247460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.247610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.247656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.247852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.247896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.248051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.248098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 
00:28:22.849 [2024-12-06 19:26:33.248330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.248383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.248545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.248591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.248760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.248805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.248971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.249016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.249176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.249225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 
00:28:22.849 [2024-12-06 19:26:33.249452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.249506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.249686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.249735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.249926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.249974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.250172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.250220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.250381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.250428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 
00:28:22.849 [2024-12-06 19:26:33.250628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.250710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.250921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.250973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.251138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.251185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.251372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.251421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.251650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.251709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 
00:28:22.849 [2024-12-06 19:26:33.251936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.251982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.252129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.252182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.252355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.252407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.252601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.252648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.252852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.252900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 
00:28:22.849 [2024-12-06 19:26:33.253120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.253168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.253359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.253420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.253592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.253640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.253875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.253924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.254072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.254120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 
00:28:22.849 [2024-12-06 19:26:33.254307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.254355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.254541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.254592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.254817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.254869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.255088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.255140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.255343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.255394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 
00:28:22.849 [2024-12-06 19:26:33.255587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.255637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.255858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.255915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.849 qpair failed and we were unable to recover it. 00:28:22.849 [2024-12-06 19:26:33.256116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.849 [2024-12-06 19:26:33.256169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.256404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.256454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.256629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.256692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 
00:28:22.850 [2024-12-06 19:26:33.256875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.256926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.257107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.257163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.257411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.257460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.257707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.257756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.257896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.257944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 
00:28:22.850 [2024-12-06 19:26:33.258167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.258214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.258389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.258437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.258600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.258649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.258811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.258859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.259037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.259084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 
00:28:22.850 [2024-12-06 19:26:33.259237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.259284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.259512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.259565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.259754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.259803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.260023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.260071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.260257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.260309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 
00:28:22.850 [2024-12-06 19:26:33.260534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.260586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.260765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.260815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.261031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.261082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.261293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.261344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.261546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.261600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 
00:28:22.850 [2024-12-06 19:26:33.261768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.261824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.261983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.262034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.262229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.262279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.262444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.262495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.262642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.262705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 
00:28:22.850 [2024-12-06 19:26:33.262948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.262999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.263251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.263310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.263482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.263533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.263745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.263795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.263975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.264025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 
00:28:22.850 [2024-12-06 19:26:33.264203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.264259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.264500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.264551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.264737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.264789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.264991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.265041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 00:28:22.850 [2024-12-06 19:26:33.265246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.265296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 
00:28:22.850 [2024-12-06 19:26:33.265508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.850 [2024-12-06 19:26:33.265564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.850 qpair failed and we were unable to recover it. 
00:28:22.851 [2024-12-06 19:26:33.270124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.851 [2024-12-06 19:26:33.270202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.851 qpair failed and we were unable to recover it. 
[identical connect()/qpair failure messages repeated for tqpair=0x7f82d4000b90 and tqpair=0x7f82cc000b90 through 19:26:33.295]
00:28:22.853 [2024-12-06 19:26:33.295541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.295611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.295893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.295950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.296163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.296242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.296451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.296506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.296688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.296743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 
00:28:22.854 [2024-12-06 19:26:33.297017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.297094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.297272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.297353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.297577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.297640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.297913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.297966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.298144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.298195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 
00:28:22.854 [2024-12-06 19:26:33.298387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.298443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.298683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.298736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.298930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.298989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.299221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.299273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.299432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.299500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 
00:28:22.854 [2024-12-06 19:26:33.299765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.299817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.300006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.300060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.300268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.300327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.300504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.300587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.300813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.300866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 
00:28:22.854 [2024-12-06 19:26:33.301125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.301177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.301412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.301463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.301651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.301746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.301951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.302029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.302243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.302317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 
00:28:22.854 [2024-12-06 19:26:33.302616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.302714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.302955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.303013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.303204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.303259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.303450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.303510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.303734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.303790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 
00:28:22.854 [2024-12-06 19:26:33.304009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.304063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.304272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.304322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.304501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.304557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.304820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.304871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.305025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.305075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 
00:28:22.854 [2024-12-06 19:26:33.305332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.305383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.305574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.305643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.305865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.305936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.306146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.854 [2024-12-06 19:26:33.306200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.854 qpair failed and we were unable to recover it. 00:28:22.854 [2024-12-06 19:26:33.306356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.306406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 
00:28:22.855 [2024-12-06 19:26:33.306614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.306691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.306926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.306977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.307174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.307225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.307475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.307525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.307702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.307754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 
00:28:22.855 [2024-12-06 19:26:33.307964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.308015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.308206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.308259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.308495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.308558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.308804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.308859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.309096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.309158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 
00:28:22.855 [2024-12-06 19:26:33.309343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.309420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.309581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.309631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.309870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.309923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.310185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.310235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.310396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.310447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 
00:28:22.855 [2024-12-06 19:26:33.310616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.310702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.310897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.310946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.311112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.311181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.311351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.311404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.311575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.311625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 
00:28:22.855 [2024-12-06 19:26:33.311803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.311854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.312083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.312137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.312353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.312407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.312582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.312635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.312866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.312920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 
00:28:22.855 [2024-12-06 19:26:33.313108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.313158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.313322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.313372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.313564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.313618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.313865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.313947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.314216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.314275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 
00:28:22.855 [2024-12-06 19:26:33.314465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.314517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.314700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.314753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.315022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.315077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.315331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.315385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.315626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.315701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 
00:28:22.855 [2024-12-06 19:26:33.315936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.316010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.316214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.316282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.855 qpair failed and we were unable to recover it. 00:28:22.855 [2024-12-06 19:26:33.316544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.855 [2024-12-06 19:26:33.316605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.856 qpair failed and we were unable to recover it. 00:28:22.856 [2024-12-06 19:26:33.316849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.856 [2024-12-06 19:26:33.316907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.856 qpair failed and we were unable to recover it. 00:28:22.856 [2024-12-06 19:26:33.317188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.856 [2024-12-06 19:26:33.317240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.856 qpair failed and we were unable to recover it. 
00:28:22.857 [2024-12-06 19:26:33.331864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.857 [2024-12-06 19:26:33.331961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.857 qpair failed and we were unable to recover it. 00:28:22.857 [2024-12-06 19:26:33.332227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.857 [2024-12-06 19:26:33.332284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.857 qpair failed and we were unable to recover it. 00:28:22.857 [2024-12-06 19:26:33.332514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.857 [2024-12-06 19:26:33.332571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.857 qpair failed and we were unable to recover it. 00:28:22.857 [2024-12-06 19:26:33.332852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.857 [2024-12-06 19:26:33.332917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.857 qpair failed and we were unable to recover it. 00:28:22.857 [2024-12-06 19:26:33.333144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.857 [2024-12-06 19:26:33.333197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:22.857 qpair failed and we were unable to recover it. 
00:28:22.857 [2024-12-06 19:26:33.336034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.857 [2024-12-06 19:26:33.336114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:22.857 qpair failed and we were unable to recover it. 
00:28:22.859 [2024-12-06 19:26:33.349171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.349223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.349477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.349552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.349762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.349832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.350084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.350170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.350463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.350520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 
00:28:22.859 [2024-12-06 19:26:33.350687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.350740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.350941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.350993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.351280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.351368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.351591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.351656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.351903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.351955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 
00:28:22.859 [2024-12-06 19:26:33.352217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.352294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.352476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.352531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.352724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.352778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.352951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.353014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.353258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.353335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 
00:28:22.859 [2024-12-06 19:26:33.353534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.353584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.353802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.353853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.354113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.354186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.354401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.354460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.354704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.354763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 
00:28:22.859 [2024-12-06 19:26:33.354963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.355015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:22.859 [2024-12-06 19:26:33.355241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.859 [2024-12-06 19:26:33.355312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:22.859 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.355472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.355523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.355736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.355812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.356038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.356115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 
00:28:23.135 [2024-12-06 19:26:33.356323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.356375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.356593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.356684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.356907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.356991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.357176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.357229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.357446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.357508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 
00:28:23.135 [2024-12-06 19:26:33.357694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.357748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.357927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.357992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.358236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.358288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.358498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.358562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.358780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.358835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 
00:28:23.135 [2024-12-06 19:26:33.359031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.359082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.359279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.135 [2024-12-06 19:26:33.359332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.135 qpair failed and we were unable to recover it. 00:28:23.135 [2024-12-06 19:26:33.359488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.359539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.359739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.359791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.359991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.360045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 
00:28:23.136 [2024-12-06 19:26:33.360310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.360361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.360585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.360638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.360830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.360881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.361044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.361095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.361256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.361313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 
00:28:23.136 [2024-12-06 19:26:33.361566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.361618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.361799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.361856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.362113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.362175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.362402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.362458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.362650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.362722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 
00:28:23.136 [2024-12-06 19:26:33.362914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.362964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.363179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.363244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.363467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.363521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.363763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.363838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.364174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.364252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 
00:28:23.136 [2024-12-06 19:26:33.364495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.364563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.364799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.364855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.365118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.365183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.365466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.365546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.365827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.365880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 
00:28:23.136 [2024-12-06 19:26:33.366122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.366174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.366393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.366475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.366746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.366802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.367021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.367075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.367282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.367336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 
00:28:23.136 [2024-12-06 19:26:33.367525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.367576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.367782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.367834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.368015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.368071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.368271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.368326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.368571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.368626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 
00:28:23.136 [2024-12-06 19:26:33.368842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.368894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.369123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.369179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.369367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.369418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.369612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.369720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 00:28:23.136 [2024-12-06 19:26:33.369988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.136 [2024-12-06 19:26:33.370044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.136 qpair failed and we were unable to recover it. 
00:28:23.136 [2024-12-06 19:26:33.370288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.370341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.370546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.370598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.370852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.370909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.371171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.371221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.371432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.371505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 
00:28:23.137 [2024-12-06 19:26:33.371697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.371751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.371954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.372026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.372248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.372299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.372513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.372569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.372790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.372846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 
00:28:23.137 [2024-12-06 19:26:33.373111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.373162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.373351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.373422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.373632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.373709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.373969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.374027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.374208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.374266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 
00:28:23.137 [2024-12-06 19:26:33.374523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.374574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.374858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.374914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.375144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.375196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.375421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.375476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.375655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.375730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 
00:28:23.137 [2024-12-06 19:26:33.375957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.376008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.376235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.376300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.376489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.376539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.376732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.376807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.377069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.377127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 
00:28:23.137 [2024-12-06 19:26:33.377384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.377435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.377696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.377770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.378021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.378071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.378297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.378355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.378538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.378597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 
00:28:23.137 [2024-12-06 19:26:33.378847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.378900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.379061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.379131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.379364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.379415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.379590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.379650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.379938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.379998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 
00:28:23.137 [2024-12-06 19:26:33.380257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.380307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.380512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.380562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.380759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.137 [2024-12-06 19:26:33.380812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.137 qpair failed and we were unable to recover it. 00:28:23.137 [2024-12-06 19:26:33.381002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.381068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.381380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.381486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 
00:28:23.138 [2024-12-06 19:26:33.381762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.381820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.382097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.382180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.382488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.382551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.382830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.382908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.383121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.383181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 
00:28:23.138 [2024-12-06 19:26:33.383422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.383473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.383721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.383789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.384031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.384095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.384344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.384418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.384678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.384736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 
00:28:23.138 [2024-12-06 19:26:33.384944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.384996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.385199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.385277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.385515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.385586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.385832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.385893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.386129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.386187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 
00:28:23.138 [2024-12-06 19:26:33.386394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.386447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.386626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.386693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.387034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.387098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.387386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.387467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.387746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.387825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 
00:28:23.138 [2024-12-06 19:26:33.388037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.388087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.388312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.388382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.388644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.388730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.388983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.389061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.389342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.389418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 
00:28:23.138 [2024-12-06 19:26:33.389644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.389715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.390017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.390083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.390439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.390502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.390779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.390848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.391121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.391173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 
00:28:23.138 [2024-12-06 19:26:33.391399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.391465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.391689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.391742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.391965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.392031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.392347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.392432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.138 [2024-12-06 19:26:33.392649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.392720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 
00:28:23.138 [2024-12-06 19:26:33.392973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.138 [2024-12-06 19:26:33.393044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.138 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.393229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.393281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.393474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.393546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.393845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.393928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.394200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.394278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 
00:28:23.139 [2024-12-06 19:26:33.394543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.394595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.394794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.394875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.395128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.395208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.395529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.395607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.395853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.395934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 
00:28:23.139 [2024-12-06 19:26:33.396230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.396283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.396515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.396597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.396865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.396947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.397140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.397195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.397394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.397456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 
00:28:23.139 [2024-12-06 19:26:33.397662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.397752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.398002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.398066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.398283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.398361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.398644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.398733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.398913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.398991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 
00:28:23.139 [2024-12-06 19:26:33.399308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.399375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.399599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.399662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.399917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.400007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.400313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.400365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 00:28:23.139 [2024-12-06 19:26:33.400582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.139 [2024-12-06 19:26:33.400653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.139 qpair failed and we were unable to recover it. 
00:28:23.141 [2024-12-06 19:26:33.422923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.141 [2024-12-06 19:26:33.422987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:23.141 qpair failed and we were unable to recover it.
00:28:23.141 [2024-12-06 19:26:33.423235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.141 [2024-12-06 19:26:33.423289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:23.141 qpair failed and we were unable to recover it.
00:28:23.141 [2024-12-06 19:26:33.423534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.141 [2024-12-06 19:26:33.423598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:23.141 qpair failed and we were unable to recover it.
00:28:23.141 [2024-12-06 19:26:33.424018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.141 [2024-12-06 19:26:33.424136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:23.141 qpair failed and we were unable to recover it.
00:28:23.141 [2024-12-06 19:26:33.424399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.141 [2024-12-06 19:26:33.424473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:23.141 qpair failed and we were unable to recover it.
00:28:23.142 [2024-12-06 19:26:33.436496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.436560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.436874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.436927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.437157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.437209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.437459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.437527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.437766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.437819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 
00:28:23.142 [2024-12-06 19:26:33.437995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.438064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.438283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.438335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.438502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.438552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.438777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.438856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.439157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.439213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 
00:28:23.142 [2024-12-06 19:26:33.439410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.439474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.439644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.142 [2024-12-06 19:26:33.439717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.142 qpair failed and we were unable to recover it. 00:28:23.142 [2024-12-06 19:26:33.439925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.440005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.440314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.440381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.440618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.440682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 
00:28:23.143 [2024-12-06 19:26:33.440895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.440981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.441294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.441359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.441615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.441725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.442013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.442090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.442276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.442328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 
00:28:23.143 [2024-12-06 19:26:33.442559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.442612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.442880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.442946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.443181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.443251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.443577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.443657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.443889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.443940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 
00:28:23.143 [2024-12-06 19:26:33.444233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.444300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.444543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.444607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.444892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.444964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.445263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.445320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.445518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.445606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 
00:28:23.143 [2024-12-06 19:26:33.445885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.445938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.446123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.446173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.446433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.446508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.446721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.446774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.447053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.447131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 
00:28:23.143 [2024-12-06 19:26:33.447332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.447382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.447587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.447684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.447965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.448049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.448314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.448367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.448648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.448748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 
00:28:23.143 [2024-12-06 19:26:33.448990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.449054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.449303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.449366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.449571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.449657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.449918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.449971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.450186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.450236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 
00:28:23.143 [2024-12-06 19:26:33.450446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.450499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.450747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.450814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.451040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.451116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.451419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.451476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 00:28:23.143 [2024-12-06 19:26:33.451716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.143 [2024-12-06 19:26:33.451790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.143 qpair failed and we were unable to recover it. 
00:28:23.143 [2024-12-06 19:26:33.452031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.452083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.452253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.452329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.452581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.452654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.452948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.453006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.453215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.453291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 
00:28:23.144 [2024-12-06 19:26:33.453517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.453569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.453825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.453891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.454125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.454208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.454487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.454567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.454804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.454880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 
00:28:23.144 [2024-12-06 19:26:33.455144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.455199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.455431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.455494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.455743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.455824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.456093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.456174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.456418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.456470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 
00:28:23.144 [2024-12-06 19:26:33.456741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.456796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.456962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.457013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.457275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.457348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.457637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.457740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.457935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.457996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 
00:28:23.144 [2024-12-06 19:26:33.458237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.458320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.458516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.458567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.458844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.458910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.459220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.459287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 00:28:23.144 [2024-12-06 19:26:33.459540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.144 [2024-12-06 19:26:33.459591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.144 qpair failed and we were unable to recover it. 
00:28:23.144 [2024-12-06 19:26:33.459822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.144 [2024-12-06 19:26:33.459912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:23.144 qpair failed and we were unable to recover it.
[... the same three-line failure pattern — connect() refused (errno = 111, ECONNREFUSED), the nvme_tcp_qpair_connect_sock error for tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously from 19:26:33.459 through 19:26:33.497; repeated occurrences elided ...]
00:28:23.147 [2024-12-06 19:26:33.497203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.147 [2024-12-06 19:26:33.497253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.147 qpair failed and we were unable to recover it. 00:28:23.147 [2024-12-06 19:26:33.497503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.147 [2024-12-06 19:26:33.497566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.147 qpair failed and we were unable to recover it. 00:28:23.147 [2024-12-06 19:26:33.497876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.147 [2024-12-06 19:26:33.497940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.147 qpair failed and we were unable to recover it. 00:28:23.147 [2024-12-06 19:26:33.498237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.147 [2024-12-06 19:26:33.498300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.147 qpair failed and we were unable to recover it. 00:28:23.147 [2024-12-06 19:26:33.498552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.147 [2024-12-06 19:26:33.498618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.147 qpair failed and we were unable to recover it. 
00:28:23.147 [2024-12-06 19:26:33.498883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.147 [2024-12-06 19:26:33.498946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.147 qpair failed and we were unable to recover it. 00:28:23.147 [2024-12-06 19:26:33.499198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.147 [2024-12-06 19:26:33.499263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.147 qpair failed and we were unable to recover it. 00:28:23.147 [2024-12-06 19:26:33.499505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.147 [2024-12-06 19:26:33.499568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.147 qpair failed and we were unable to recover it. 00:28:23.147 [2024-12-06 19:26:33.499875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.499939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.500183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.500246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 
00:28:23.148 [2024-12-06 19:26:33.500541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.500604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.500844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.500909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.501199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.501262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.501464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.501527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.501779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.501844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 
00:28:23.148 [2024-12-06 19:26:33.502103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.502167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.502415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.502477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.502730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.502794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.503092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.503155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.503396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.503457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 
00:28:23.148 [2024-12-06 19:26:33.503751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.503815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.504058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.504121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.504337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.504399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.504629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.504722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.505018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.505081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 
00:28:23.148 [2024-12-06 19:26:33.505328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.505392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.505653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.505735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.506028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.506091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.506391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.506455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.506684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.506749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 
00:28:23.148 [2024-12-06 19:26:33.507033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.507095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.507336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.507402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.507590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.507653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.507924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.507996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.508293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.508357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 
00:28:23.148 [2024-12-06 19:26:33.508603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.508683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.508925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.508988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.509172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.509234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.509467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.509530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.509795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.509870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 
00:28:23.148 [2024-12-06 19:26:33.510082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.510144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.510435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.510497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.510733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.510797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.511010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.511072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.511311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.511374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 
00:28:23.148 [2024-12-06 19:26:33.511582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.511648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.148 qpair failed and we were unable to recover it. 00:28:23.148 [2024-12-06 19:26:33.511947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.148 [2024-12-06 19:26:33.512018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.512206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.512269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.512564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.512628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.512941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.513005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 
00:28:23.149 [2024-12-06 19:26:33.513268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.513331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.513573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.513623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.513842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.513921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.514190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.514252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.514497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.514562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 
00:28:23.149 [2024-12-06 19:26:33.514881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.514933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.515172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.515235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.515523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.515586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.515866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.515930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.516236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.516286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 
00:28:23.149 [2024-12-06 19:26:33.516557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.516621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.516864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.516927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.517146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.517209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.517449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.517514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.517799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.517865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 
00:28:23.149 [2024-12-06 19:26:33.518117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.518180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.518434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.518499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.518742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.518807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.519024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.519085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.519268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.519330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 
00:28:23.149 [2024-12-06 19:26:33.519575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.519639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.519952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.520004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.520170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.520221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.520477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.520541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.520787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.520851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 
00:28:23.149 [2024-12-06 19:26:33.521146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.521208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.521451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.521514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.521754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.521819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.522057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.522118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 00:28:23.149 [2024-12-06 19:26:33.522363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.149 [2024-12-06 19:26:33.522438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.149 qpair failed and we were unable to recover it. 
00:28:23.152 [2024-12-06 19:26:33.558823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.152 [2024-12-06 19:26:33.558887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.152 qpair failed and we were unable to recover it. 00:28:23.152 [2024-12-06 19:26:33.559088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.152 [2024-12-06 19:26:33.559153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.152 qpair failed and we were unable to recover it. 00:28:23.152 [2024-12-06 19:26:33.559414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.152 [2024-12-06 19:26:33.559477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.152 qpair failed and we were unable to recover it. 00:28:23.152 [2024-12-06 19:26:33.559695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.152 [2024-12-06 19:26:33.559758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.152 qpair failed and we were unable to recover it. 00:28:23.152 [2024-12-06 19:26:33.560020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.152 [2024-12-06 19:26:33.560083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.152 qpair failed and we were unable to recover it. 
00:28:23.152 [2024-12-06 19:26:33.560371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.560434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.560638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.560719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.560963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.561029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.561273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.561339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.561622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.561719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 
00:28:23.153 [2024-12-06 19:26:33.562015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.562079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.562328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.562391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.562622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.562718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.562960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.563023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.563310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.563374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 
00:28:23.153 [2024-12-06 19:26:33.563623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.563705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.564006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.564069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.564362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.564425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.564690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.564743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.564935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.564986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 
00:28:23.153 [2024-12-06 19:26:33.565242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.565304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.565599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.565662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.565968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.566031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.566341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.566404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.566638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.566720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 
00:28:23.153 [2024-12-06 19:26:33.567018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.567081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.567318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.567381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.567585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.567648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.567959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.568010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.568198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.568248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 
00:28:23.153 [2024-12-06 19:26:33.568446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.568524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.568724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.568790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.569039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.569101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.569342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.569414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.569701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.569766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 
00:28:23.153 [2024-12-06 19:26:33.570064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.570126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.570386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.570467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.570731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.570797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.153 [2024-12-06 19:26:33.571096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.153 [2024-12-06 19:26:33.571159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.153 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.571412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.571475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 
00:28:23.154 [2024-12-06 19:26:33.571685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.571750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.571982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.572044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.572307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.572370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.572618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.572707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.573005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.573068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 
00:28:23.154 [2024-12-06 19:26:33.573260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.573322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.573572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.573634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.573900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.573964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.574247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.574310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.574565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.574627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 
00:28:23.154 [2024-12-06 19:26:33.574934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.574998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.575238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.575300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.575596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.575659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.575985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.576051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.576292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.576354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 
00:28:23.154 [2024-12-06 19:26:33.576640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.576736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.577028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.577092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.577385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.577448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.577745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.577810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.578066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.578130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 
00:28:23.154 [2024-12-06 19:26:33.578388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.578450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.578723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.578787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.579086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.579150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.579412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.579474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.579701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.579765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 
00:28:23.154 [2024-12-06 19:26:33.580005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.580068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.580314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.580380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.580627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.580704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.580951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.581012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.581247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.581311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 
00:28:23.154 [2024-12-06 19:26:33.581554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.581619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.581926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.581991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.582288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.582351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.582597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.582660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 00:28:23.154 [2024-12-06 19:26:33.582945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.583010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 
00:28:23.154 [2024-12-06 19:26:33.583259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.154 [2024-12-06 19:26:33.583321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.154 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages for tqpair=0x7f82d4000b90 (10.0.0.2:4420) repeated ~114 more times between 19:26:33.583 and 19:26:33.620; repeats trimmed ...]
00:28:23.157 [2024-12-06 19:26:33.620246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.157 [2024-12-06 19:26:33.620308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.157 qpair failed and we were unable to recover it. 00:28:23.157 [2024-12-06 19:26:33.620594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.620656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.620964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.621026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.621271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.621335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.621581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.621644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 
00:28:23.158 [2024-12-06 19:26:33.621979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.622042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.622287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.622362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.622615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.622699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.622905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.622967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.623220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.623294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 
00:28:23.158 [2024-12-06 19:26:33.623497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.623559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.623816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.623880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.624165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.624229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.624476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.624540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.624834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.624898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 
00:28:23.158 [2024-12-06 19:26:33.625147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.625209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.625500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.625564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.625869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.625933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.626180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.626242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.626530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.626593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 
00:28:23.158 [2024-12-06 19:26:33.626877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.626941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.627194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.627257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.627539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.627602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.627911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.627974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.628178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.628240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 
00:28:23.158 [2024-12-06 19:26:33.628526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.628590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.628849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.628913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.629199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.629262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.629476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.629537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.629791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.629856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 
00:28:23.158 [2024-12-06 19:26:33.630102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.630165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.630415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.630477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.630763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.630827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.631094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.631157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 00:28:23.158 [2024-12-06 19:26:33.631439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.158 [2024-12-06 19:26:33.631500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.158 qpair failed and we were unable to recover it. 
00:28:23.159 [2024-12-06 19:26:33.631724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.631787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.632031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.632097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.632346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.632408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.632636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.632713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.633002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.633065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 
00:28:23.159 [2024-12-06 19:26:33.633272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.633336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.633596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.633658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.633954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.634017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.634317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.634380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.634690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.634754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 
00:28:23.159 [2024-12-06 19:26:33.634947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.635009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.635260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.635344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.635624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.635703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.635951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.636014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.636298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.636362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 
00:28:23.159 [2024-12-06 19:26:33.636652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.636732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.637027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.637090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.637348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.637412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.637660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.637756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.638045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.638108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 
00:28:23.159 [2024-12-06 19:26:33.638311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.638374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.638614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.638695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.638951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.639015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.639301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.639362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.639616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.639697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 
00:28:23.159 [2024-12-06 19:26:33.639969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.640034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.640325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.640387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.640633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.640713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.640971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.641036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.641343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.641406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 
00:28:23.159 [2024-12-06 19:26:33.641718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.641782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.642126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.642189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.642430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.642493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.642729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.642793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.642997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.643061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 
00:28:23.159 [2024-12-06 19:26:33.643303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.643378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.643597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.643661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.159 [2024-12-06 19:26:33.643906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.159 [2024-12-06 19:26:33.643972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.159 qpair failed and we were unable to recover it. 00:28:23.160 [2024-12-06 19:26:33.644285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.160 [2024-12-06 19:26:33.644350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.160 qpair failed and we were unable to recover it. 00:28:23.160 [2024-12-06 19:26:33.644596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.160 [2024-12-06 19:26:33.644659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.160 qpair failed and we were unable to recover it. 
00:28:23.160 [2024-12-06 19:26:33.644961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.160 [2024-12-06 19:26:33.645026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.160 qpair failed and we were unable to recover it. 00:28:23.160 [2024-12-06 19:26:33.645267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.160 [2024-12-06 19:26:33.645331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.160 qpair failed and we were unable to recover it. 00:28:23.160 [2024-12-06 19:26:33.645576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.160 [2024-12-06 19:26:33.645638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.160 qpair failed and we were unable to recover it. 00:28:23.160 [2024-12-06 19:26:33.645900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.160 [2024-12-06 19:26:33.645963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.160 qpair failed and we were unable to recover it. 00:28:23.160 [2024-12-06 19:26:33.646210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.160 [2024-12-06 19:26:33.646273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.160 qpair failed and we were unable to recover it. 
00:28:23.163 [2024-12-06 19:26:33.681346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.681410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.681657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.681750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.681961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.682013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.682171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.682222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.682506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.682570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 
00:28:23.163 [2024-12-06 19:26:33.682892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.682959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.683236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.683312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.683550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.683601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.683878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.683931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.684102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.684152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 
00:28:23.163 [2024-12-06 19:26:33.684409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.684484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.684760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.684814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.684985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.685059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.685322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.685375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.685658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.685740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 
00:28:23.163 [2024-12-06 19:26:33.686022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.686101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.686336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.686396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.686592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.686698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.686980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.687045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.687282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.687344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 
00:28:23.163 [2024-12-06 19:26:33.687597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.687688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.687967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.688018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.688261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.688322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.688564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.688628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.688901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.688964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 
00:28:23.163 [2024-12-06 19:26:33.689164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.689247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.689465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.689517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.689720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.689799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.690029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.690105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 00:28:23.163 [2024-12-06 19:26:33.690340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.163 [2024-12-06 19:26:33.690404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.163 qpair failed and we were unable to recover it. 
00:28:23.164 [2024-12-06 19:26:33.690636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.690717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.690970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.691024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.691197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.691250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.691440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.691491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.691685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.691753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 
00:28:23.164 [2024-12-06 19:26:33.692009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.692091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.692365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.692422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.692625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.692691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.692877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.692930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.693184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.693248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 
00:28:23.164 [2024-12-06 19:26:33.693481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.693532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.693793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.693874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.694063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.694117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.694330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.694411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.694712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.694765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 
00:28:23.164 [2024-12-06 19:26:33.694955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.695006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.164 [2024-12-06 19:26:33.695206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.164 [2024-12-06 19:26:33.695274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.164 qpair failed and we were unable to recover it. 00:28:23.438 [2024-12-06 19:26:33.695511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.695568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.438 qpair failed and we were unable to recover it. 00:28:23.438 [2024-12-06 19:26:33.695737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.695795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.438 qpair failed and we were unable to recover it. 00:28:23.438 [2024-12-06 19:26:33.696052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.696116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.438 qpair failed and we were unable to recover it. 
00:28:23.438 [2024-12-06 19:26:33.696324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.696388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.438 qpair failed and we were unable to recover it. 00:28:23.438 [2024-12-06 19:26:33.696643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.696715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.438 qpair failed and we were unable to recover it. 00:28:23.438 [2024-12-06 19:26:33.696940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.696996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.438 qpair failed and we were unable to recover it. 00:28:23.438 [2024-12-06 19:26:33.697207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.697258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.438 qpair failed and we were unable to recover it. 00:28:23.438 [2024-12-06 19:26:33.697487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.697539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.438 qpair failed and we were unable to recover it. 
00:28:23.438 [2024-12-06 19:26:33.697739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.697792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.438 qpair failed and we were unable to recover it. 00:28:23.438 [2024-12-06 19:26:33.698003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.438 [2024-12-06 19:26:33.698076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.698281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.698332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.698532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.698595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.698798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.698851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 
00:28:23.439 [2024-12-06 19:26:33.699044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.699095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.699259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.699322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.699550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.699602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.699796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.699850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.700051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.700104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 
00:28:23.439 [2024-12-06 19:26:33.700258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.700310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.700543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.700613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.700881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.700939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.701155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.701206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.701409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.701471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 
00:28:23.439 [2024-12-06 19:26:33.701787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.701853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.702110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.702179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.702430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.702489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.702701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.702753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.702939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.702993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 
00:28:23.439 [2024-12-06 19:26:33.703154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.703235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.703502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.703571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.703862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.703920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.704088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.704141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.704402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.704457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 
00:28:23.439 [2024-12-06 19:26:33.704693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.704746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.704979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.705060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.705334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.705391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.705555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.705623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.705887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.705940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 
00:28:23.439 [2024-12-06 19:26:33.706231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.706294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.706550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.706635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.706873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.706928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.707095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.707147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.707365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.707442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 
00:28:23.439 [2024-12-06 19:26:33.707736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.707802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.708107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.708178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.708464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.708518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.439 qpair failed and we were unable to recover it. 00:28:23.439 [2024-12-06 19:26:33.708726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.439 [2024-12-06 19:26:33.708784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.708998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.709064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 
00:28:23.440 [2024-12-06 19:26:33.709348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.709410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.709759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.709823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.710003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.710055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.710209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.710283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.710543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.710609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 
00:28:23.440 [2024-12-06 19:26:33.710985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.711103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.711429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.711528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.711829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.711889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.712104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.712160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.712387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.712452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 
00:28:23.440 [2024-12-06 19:26:33.712711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.712777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.713046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.713142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.713395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.713465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.713724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.713827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.714158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.714228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 
00:28:23.440 [2024-12-06 19:26:33.714461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.714525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.714814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.714892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.715199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.715256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.715405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.715455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.715698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.715769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 
00:28:23.440 [2024-12-06 19:26:33.716066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.716130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.716377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.716459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.716685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.716739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.716907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.716992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.717238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.717291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 
00:28:23.440 [2024-12-06 19:26:33.717576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.717640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.717882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.717974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.718220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.718276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.718500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.718580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.718852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.718934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 
00:28:23.440 [2024-12-06 19:26:33.719147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.719213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.719468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.719541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.719847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.719906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.720078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.720156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.720395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.720448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 
00:28:23.440 [2024-12-06 19:26:33.720698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.720764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.440 qpair failed and we were unable to recover it. 00:28:23.440 [2024-12-06 19:26:33.721010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.440 [2024-12-06 19:26:33.721092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.721390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.721448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.721625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.721693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.721876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.721951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 
00:28:23.441 [2024-12-06 19:26:33.722252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.722304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.722466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.722542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.722756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.722838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.723017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.723073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.723278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.723366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 
00:28:23.441 [2024-12-06 19:26:33.723546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.723598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.723832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.723901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.724131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.724209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.724544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.724601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.724816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.724868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 
00:28:23.441 [2024-12-06 19:26:33.725083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.725136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.725323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.725385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.725627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.725738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.725997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.726048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.726209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.726263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 
00:28:23.441 [2024-12-06 19:26:33.726495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.726560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.726858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.726924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.727177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.727259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.727547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.727601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.727895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.727962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 
00:28:23.441 [2024-12-06 19:26:33.728227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.728290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.728502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.728568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.728858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.728938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.729142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.729194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.729412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.729493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 
00:28:23.441 [2024-12-06 19:26:33.729759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.729813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.730073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.730135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.730376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.730442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.730706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.730769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.731050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.731115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 
00:28:23.441 [2024-12-06 19:26:33.731367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.731431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.731687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.731768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.732028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.732105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.732283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.732335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.441 qpair failed and we were unable to recover it. 00:28:23.441 [2024-12-06 19:26:33.732539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.441 [2024-12-06 19:26:33.732591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.442 qpair failed and we were unable to recover it. 
00:28:23.442 [2024-12-06 19:26:33.732847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.442 [2024-12-06 19:26:33.732912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420
00:28:23.442 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats 114 more times between 19:26:33.733 and 19:26:33.769 ...]
00:28:23.445 [2024-12-06 19:26:33.769234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.769301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.769583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.769647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.769932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.770020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.770250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.770320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.770583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.770646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 
00:28:23.445 [2024-12-06 19:26:33.770932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.770998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.771252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.771315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.771526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.771596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.771870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.771937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.772245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.772310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 
00:28:23.445 [2024-12-06 19:26:33.772569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.772635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.772920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.772984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.773248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.773323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.773620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.773710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.773974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.774037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 
00:28:23.445 [2024-12-06 19:26:33.774309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.774374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.774659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.774757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.775014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.775102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.775366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.775429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.775702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.775778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 
00:28:23.445 [2024-12-06 19:26:33.776065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.776129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.776383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.776448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.776711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.776791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.777069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.777134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.777331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.777394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 
00:28:23.445 [2024-12-06 19:26:33.777691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.777760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.778062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.778126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.778398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.778465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.778656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.778761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.445 [2024-12-06 19:26:33.779009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.779089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 
00:28:23.445 [2024-12-06 19:26:33.779378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.445 [2024-12-06 19:26:33.779443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.445 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.779682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.779751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.779965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.780040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.780301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.780370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.780632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.780722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 
00:28:23.446 [2024-12-06 19:26:33.780975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.781040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.781290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.781355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.781602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.781703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.781967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.782048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.782298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.782362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 
00:28:23.446 [2024-12-06 19:26:33.782633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.782726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.782941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.783004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.783263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.783342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.783594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.783695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.783952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.784016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 
00:28:23.446 [2024-12-06 19:26:33.784286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.784351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.784569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.784634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.784919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.784993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.785267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.785332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.785580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.785648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 
00:28:23.446 [2024-12-06 19:26:33.785965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.786029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.786222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.786289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.786545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.786631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.786875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.786939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.787182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.787248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 
00:28:23.446 [2024-12-06 19:26:33.787523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.787592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.787876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.787941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.788227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.788293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.788554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.788617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.788900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.788978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 
00:28:23.446 [2024-12-06 19:26:33.789213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.789279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.789526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.789590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.789847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.789913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.790203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.790266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.790511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.790592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 
00:28:23.446 [2024-12-06 19:26:33.790908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.790974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.791230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.791309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.791605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.791691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.446 [2024-12-06 19:26:33.791923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.446 [2024-12-06 19:26:33.791986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.446 qpair failed and we were unable to recover it. 00:28:23.447 [2024-12-06 19:26:33.792235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.792304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 
00:28:23.447 [2024-12-06 19:26:33.792557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.792622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 00:28:23.447 [2024-12-06 19:26:33.792937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.793020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 00:28:23.447 [2024-12-06 19:26:33.793254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.793317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 00:28:23.447 [2024-12-06 19:26:33.793513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.793576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 00:28:23.447 [2024-12-06 19:26:33.793816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.793895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 
00:28:23.447 [2024-12-06 19:26:33.794178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.794243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 00:28:23.447 [2024-12-06 19:26:33.794530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.794594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 00:28:23.447 [2024-12-06 19:26:33.794918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.795018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 00:28:23.447 [2024-12-06 19:26:33.795309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.795393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 00:28:23.447 [2024-12-06 19:26:33.795595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.447 [2024-12-06 19:26:33.795661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.447 qpair failed and we were unable to recover it. 
00:28:23.450 [2024-12-06 19:26:33.831243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.831309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.831590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.831657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.831950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.832015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.832247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.832315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.832557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.832621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 
00:28:23.450 [2024-12-06 19:26:33.832907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.832985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.833260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.833327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.833617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.833718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.834002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.834082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.834325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.834389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 
00:28:23.450 [2024-12-06 19:26:33.834686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.834757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.835023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.835087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.835345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.835422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.835695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.835764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.836063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.836134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 
00:28:23.450 [2024-12-06 19:26:33.836363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.836431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.836692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.836759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.837014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.837096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.837356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.837421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.837720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.837787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 
00:28:23.450 [2024-12-06 19:26:33.838096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.838163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.838462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.838529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.838808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.838877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.839140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.839207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 00:28:23.450 [2024-12-06 19:26:33.839464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.450 [2024-12-06 19:26:33.839533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.450 qpair failed and we were unable to recover it. 
00:28:23.450 [2024-12-06 19:26:33.839840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.839908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.840149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.840214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.840502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.840570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.840801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.840867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.841121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.841202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 
00:28:23.451 [2024-12-06 19:26:33.841436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.841502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.841750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.841818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.842099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.842166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.842462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.842527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.842809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.842879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 
00:28:23.451 [2024-12-06 19:26:33.843184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.843251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.843505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.843577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.843870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.843938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.844156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.844222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.844425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.844491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 
00:28:23.451 [2024-12-06 19:26:33.844779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.844846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.845104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.845174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.845486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.845552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.845818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.845888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.846092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.846159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 
00:28:23.451 [2024-12-06 19:26:33.846411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.846478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.846699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.846769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.847085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.847153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.847376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.847459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.847755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.847830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 
00:28:23.451 [2024-12-06 19:26:33.848074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.848141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.848441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.848517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.848798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.848868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.849074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.849139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.849397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.849468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 
00:28:23.451 [2024-12-06 19:26:33.849698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.849764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.850031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.850096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.850306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.850371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.850610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.850694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.850964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.851029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 
00:28:23.451 [2024-12-06 19:26:33.851289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.851357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.851644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.851733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.851985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.852050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.451 qpair failed and we were unable to recover it. 00:28:23.451 [2024-12-06 19:26:33.852334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.451 [2024-12-06 19:26:33.852398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.852697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.852764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 
00:28:23.452 [2024-12-06 19:26:33.853011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.853075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.853361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.853425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.853706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.853774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.854036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.854101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.854400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.854465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 
00:28:23.452 [2024-12-06 19:26:33.854724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.854791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.854974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.855039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.855276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.855341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.855581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.855648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.855872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.855938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 
00:28:23.452 [2024-12-06 19:26:33.856210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.856275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.856532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.856597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.856882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.856948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.857158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.857221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 00:28:23.452 [2024-12-06 19:26:33.857425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.452 [2024-12-06 19:26:33.857492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.452 qpair failed and we were unable to recover it. 
00:28:23.455 [2024-12-06 19:26:33.894050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.894113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.894361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.894424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.894710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.894795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.895056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.895120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.895406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.895470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 
00:28:23.455 [2024-12-06 19:26:33.895720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.895795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.896071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.896135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.896370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.896435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.896661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.896747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.897034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.897097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 
00:28:23.455 [2024-12-06 19:26:33.897396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.897459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.897718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.897784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.898052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.898116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.898378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.898442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.898699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.898768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 
00:28:23.455 [2024-12-06 19:26:33.899019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.899083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.899388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.899451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.899743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.899809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.900100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.900165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.455 [2024-12-06 19:26:33.900362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.900425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 
00:28:23.455 [2024-12-06 19:26:33.900620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.455 [2024-12-06 19:26:33.900699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.455 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.900950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.901015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.901264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.901327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.901568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.901632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.901895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.901963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 
00:28:23.456 [2024-12-06 19:26:33.902251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.902315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.902568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.902632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.902915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.902980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.903218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.903283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.903532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.903596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 
00:28:23.456 [2024-12-06 19:26:33.903913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.903979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.904222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.904285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.904546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.904611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.904915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.904982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.905228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.905291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 
00:28:23.456 [2024-12-06 19:26:33.905576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.905639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.905947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.906012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.906226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.906290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.906576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.906639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.906904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.906967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 
00:28:23.456 [2024-12-06 19:26:33.907212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.907276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.907516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.907580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.907888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.907953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.908239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.908303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.908593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.908684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 
00:28:23.456 [2024-12-06 19:26:33.908969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.909047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.909301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.909367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.909697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.909767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.910020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.910087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.910294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.910370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 
00:28:23.456 [2024-12-06 19:26:33.910615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.910715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.910926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.910991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.911286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.911353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.911689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.911758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.911988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.912062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 
00:28:23.456 [2024-12-06 19:26:33.912313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.912380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.456 [2024-12-06 19:26:33.912638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.456 [2024-12-06 19:26:33.912729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.456 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.913059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.913126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.913385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.913449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.913765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.913835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 
00:28:23.457 [2024-12-06 19:26:33.914137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.914202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.914455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.914523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.914758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.914824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.915068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.915133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.915346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.915413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 
00:28:23.457 [2024-12-06 19:26:33.915695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.915761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.915968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.916033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.916305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.916370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.916681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.916752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.917037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.917114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 
00:28:23.457 [2024-12-06 19:26:33.917408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.917472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.917701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.917773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.918051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.918117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.918346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.918410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.918707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.918782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 
00:28:23.457 [2024-12-06 19:26:33.919029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.919093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.919381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.919463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.919738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.919807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.920070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.920135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 00:28:23.457 [2024-12-06 19:26:33.920395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.457 [2024-12-06 19:26:33.920464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.457 qpair failed and we were unable to recover it. 
00:28:23.460 [2024-12-06 19:26:33.956976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.957043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.957307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.957381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.957703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.957771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.958009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.958074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.958282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.958348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 
00:28:23.460 [2024-12-06 19:26:33.958604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.958707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.959011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.959093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.959360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.959425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.959697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.959783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.960080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.960146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 
00:28:23.460 [2024-12-06 19:26:33.960413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.960477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.960752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.960821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.961022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.961087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.961306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.961380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.961678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.961751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 
00:28:23.460 [2024-12-06 19:26:33.962004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.460 [2024-12-06 19:26:33.962082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.460 qpair failed and we were unable to recover it. 00:28:23.460 [2024-12-06 19:26:33.962403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.962470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.962720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.962787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.963034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.963118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.963366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.963432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 
00:28:23.461 [2024-12-06 19:26:33.963723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.963799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.964087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.964155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.964438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.964502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.964755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.964824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.965063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.965128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 
00:28:23.461 [2024-12-06 19:26:33.965416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.965482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.965807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.965875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.966118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.966182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.966458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.966524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.966824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.966890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 
00:28:23.461 [2024-12-06 19:26:33.967179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.967246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.967502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.967582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.967910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.967978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.968233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.968297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.968592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.968657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 
00:28:23.461 [2024-12-06 19:26:33.968977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.969041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.969347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.969412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.969711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.969779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.969993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.970056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.970338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.970402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 
00:28:23.461 [2024-12-06 19:26:33.970652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.970755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.971043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.971106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.971372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.971436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.971700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.971767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.972058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.972122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 
00:28:23.461 [2024-12-06 19:26:33.972411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.972474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.972697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.972762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.972990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.973054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.973346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.973409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 00:28:23.461 [2024-12-06 19:26:33.973659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.461 [2024-12-06 19:26:33.973748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.461 qpair failed and we were unable to recover it. 
00:28:23.461 [2024-12-06 19:26:33.974044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.974109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.974400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.974464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.974761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.974828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.975086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.975151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.975446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.975511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 
00:28:23.462 [2024-12-06 19:26:33.975703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.975780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.976035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.976101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.976351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.976415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.976686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.976753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.977047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.977111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 
00:28:23.462 [2024-12-06 19:26:33.977354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.977417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.977688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.977756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.978020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.978083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.978332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.978395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.978729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.978797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 
00:28:23.462 [2024-12-06 19:26:33.979081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.979146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.979437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.979499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.979749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.979815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.980120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.980186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.980455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.980518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 
00:28:23.462 [2024-12-06 19:26:33.980809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.980874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.981119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.981184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.981468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.981533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.981741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.981808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.982100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.982163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 
00:28:23.462 [2024-12-06 19:26:33.982421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.982487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.982742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.982809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.983060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.983124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.983378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.983442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 00:28:23.462 [2024-12-06 19:26:33.983634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.462 [2024-12-06 19:26:33.983722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.462 qpair failed and we were unable to recover it. 
00:28:23.738 [2024-12-06 19:26:34.020196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.020261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.020542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.020607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.020921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.020986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.021198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.021263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.021458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.021523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 
00:28:23.738 [2024-12-06 19:26:34.021815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.021881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.022181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.022245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.022493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.022556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.022866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.022931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.023217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.023281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 
00:28:23.738 [2024-12-06 19:26:34.023528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.023591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.023859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.023924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.024182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.024249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.024450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.024516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.024768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.024833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 
00:28:23.738 [2024-12-06 19:26:34.025086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.025151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.025345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.025411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.025629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.025713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.025973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.026038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.026322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.026385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 
00:28:23.738 [2024-12-06 19:26:34.026715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.026782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.027024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.027088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.027337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.027400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.027611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.027694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.027948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.028012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 
00:28:23.738 [2024-12-06 19:26:34.028297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.028373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.028660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.028744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.029036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-06 19:26:34.029100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-06 19:26:34.029369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.029433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.029634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.029717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-06 19:26:34.029972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.030036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.030279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.030345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.030594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.030659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.030934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.030997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.031244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.031307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-06 19:26:34.031557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.031620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.031843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.031908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.032122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.032187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.032383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.032450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.032691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.032757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-06 19:26:34.033045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.033109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.033318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.033385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.033697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.033762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.034009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.034076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.034373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.034437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-06 19:26:34.034751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.034817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.035109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.035175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.035422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.035486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.035693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.035758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.035946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.036011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-06 19:26:34.036247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.036310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.036597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.036661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.036946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.037010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.037251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.037315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.037599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.037662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-06 19:26:34.037964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.038028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.038279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.038344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.038605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.038699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.038989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.039054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.039344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.039407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-06 19:26:34.039702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.039767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.040059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.040124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.040371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.040435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.040682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.040748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.040976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.041041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-06 19:26:34.041338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-06 19:26:34.041413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-06 19:26:34.041684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.041749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.042033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.042098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.042324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.042388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.042686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.042751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 
00:28:23.740 [2024-12-06 19:26:34.043008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.043074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.043369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.043434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.043649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.043742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.043968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.044032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.044284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.044350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 
00:28:23.740 [2024-12-06 19:26:34.044603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.044688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.044975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.045039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.045322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.045388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.045639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.045725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-06 19:26:34.045987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-06 19:26:34.046052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 
00:28:23.743 [2024-12-06 19:26:34.082401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.082464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.082718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.082785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.082996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.083063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.083310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.083374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.083611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.083700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 
00:28:23.743 [2024-12-06 19:26:34.083955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.084019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.084318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.084381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.084682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.084748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.085031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.085095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.085350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.085416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 
00:28:23.743 [2024-12-06 19:26:34.085713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.085780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.086037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.086101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.086350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.086414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.086658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.086749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.087064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.087129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 
00:28:23.743 [2024-12-06 19:26:34.087366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.087430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.087623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.087707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.087956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.088020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.088263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.088330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.088601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.088685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 
00:28:23.743 [2024-12-06 19:26:34.088928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.088992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.089195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.743 [2024-12-06 19:26:34.089261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.743 qpair failed and we were unable to recover it. 00:28:23.743 [2024-12-06 19:26:34.089522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.089596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.089918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.089983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.090246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.090310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 
00:28:23.744 [2024-12-06 19:26:34.090529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.090593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.090895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.090961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.091218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.091282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.091529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.091592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.091827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.091893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 
00:28:23.744 [2024-12-06 19:26:34.092076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.092140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.092380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.092444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.092700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.092767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.092959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.093023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.093301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.093365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 
00:28:23.744 [2024-12-06 19:26:34.093620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.093699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.094011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.094075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.094320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.094384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.094690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.094755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.094996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.095060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 
00:28:23.744 [2024-12-06 19:26:34.095364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.095428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.095636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.095713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.095963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.096030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.096319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.096383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.096631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.096714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 
00:28:23.744 [2024-12-06 19:26:34.096959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.097024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.097315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.097379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.097585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.097647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.097970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.098035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.098327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.098391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 
00:28:23.744 [2024-12-06 19:26:34.098639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.098726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.098983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.099048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.099334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.099398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.099718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.099784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.100062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.100126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 
00:28:23.744 [2024-12-06 19:26:34.100328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.100392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.100696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.100762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.101003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.101066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.101358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.101421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 00:28:23.744 [2024-12-06 19:26:34.101693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.744 [2024-12-06 19:26:34.101761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.744 qpair failed and we were unable to recover it. 
00:28:23.744 [2024-12-06 19:26:34.102050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.102115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.102358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.102423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.102627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.102723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.103042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.103106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.103404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.103468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 
00:28:23.745 [2024-12-06 19:26:34.103761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.103827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.104078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.104141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.104352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.104415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.104651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.104729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.105013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.105076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 
00:28:23.745 [2024-12-06 19:26:34.105342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.105406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.105699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.105765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.106021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.106084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.106276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.106339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 00:28:23.745 [2024-12-06 19:26:34.106580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.106647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 
00:28:23.745 [2024-12-06 19:26:34.106881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.745 [2024-12-06 19:26:34.106948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.745 qpair failed and we were unable to recover it. 
00:28:23.748 [last message repeated verbatim for every retry from 19:26:34.107 through 19:26:34.144: connect() failed, errno = 111 / sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.]
00:28:23.748 [2024-12-06 19:26:34.144647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.144731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.144933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.144997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.145293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.145359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.145650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.145729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.145955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.146019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 
00:28:23.748 [2024-12-06 19:26:34.146271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.146335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.146647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.146732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.147017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.147083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.147368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.147432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.147719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.147785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 
00:28:23.748 [2024-12-06 19:26:34.148050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.148116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.148362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.148425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.148715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.148782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.149025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.149091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.149350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.149413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 
00:28:23.748 [2024-12-06 19:26:34.149681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.149749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-06 19:26:34.150048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-06 19:26:34.150113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.150365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.150429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.150644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.150733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.151045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.151111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-06 19:26:34.151361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.151425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.151696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.151762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.152028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.152091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.152391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.152456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.152711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.152779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-06 19:26:34.153026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.153093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.153297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.153362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.153640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.153720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.153944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.154008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.154213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.154280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-06 19:26:34.154528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.154592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.154896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.154961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.155187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.155262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.155542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.155608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.155936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.156009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-06 19:26:34.156267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.156333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.156616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.156715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.156922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.156995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.157221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.157286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.157523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.157588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-06 19:26:34.157856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.157920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.158176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.158241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.158501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.158566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.158863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.158934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.159202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.159268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-06 19:26:34.159529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.159595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.159939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.160008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.160259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.160324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.160567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.160641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.160933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.160999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-06 19:26:34.161245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.161308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.161596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.161687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.161896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.161963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.162253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.162321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-06 19:26:34.162572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-06 19:26:34.162639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-06 19:26:34.162939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.163005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.163316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.163383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.163697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.163770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.164001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.164072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.164300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.164368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 
00:28:23.750 [2024-12-06 19:26:34.164614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.164700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.164976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.165044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.165305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.165369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.165608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.165700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.165941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.166007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 
00:28:23.750 [2024-12-06 19:26:34.166298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.166363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.166699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.166773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.167012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.167077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.167300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.167378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.167613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.167703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 
00:28:23.750 [2024-12-06 19:26:34.167959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.168025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.168339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.168439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.168703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.168786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.169043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.169108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-06 19:26:34.169361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-06 19:26:34.169426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 
00:28:23.750 [2024-12-06 19:26:34.169687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.750 [2024-12-06 19:26:34.169751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:23.750 qpair failed and we were unable to recover it.
[... same connect() failure (errno = 111, ECONNREFUSED) and unrecoverable qpair error for tqpair=0x6bcfa0 repeated through 2024-12-06 19:26:34.193873 ...]
00:28:23.752 [2024-12-06 19:26:34.193971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6caf30 (9): Bad file descriptor
00:28:23.752 [2024-12-06 19:26:34.194304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.752 [2024-12-06 19:26:34.194400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:23.752 qpair failed and we were unable to recover it.
[... same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f82cc000b90 repeated through 2024-12-06 19:26:34.206235 ...]
00:28:23.753 [2024-12-06 19:26:34.206526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.206593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.206951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.207025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.207281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.207346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.207650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.207743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.208006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.208071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 
00:28:23.753 [2024-12-06 19:26:34.208291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.208363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.208694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.208764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.209010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.209075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.209381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.209450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.209705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.209772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 
00:28:23.753 [2024-12-06 19:26:34.210077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.210146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.210473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.210538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.210748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.210816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.753 qpair failed and we were unable to recover it. 00:28:23.753 [2024-12-06 19:26:34.211089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.753 [2024-12-06 19:26:34.211156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.211357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.211425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 
00:28:23.754 [2024-12-06 19:26:34.211687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.211769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.212048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.212115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.212348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.212413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.212622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.212705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.213046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.213112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 
00:28:23.754 [2024-12-06 19:26:34.213412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.213488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.213775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.213844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.214093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.214160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.214417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.214487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.214738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.214806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 
00:28:23.754 [2024-12-06 19:26:34.215047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.215129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.215408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.215476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.215721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.215790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.216038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.216106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.216403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.216467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 
00:28:23.754 [2024-12-06 19:26:34.216695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.216775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.217006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.217072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.217277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.217343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.217598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.217689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.217993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.218059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 
00:28:23.754 [2024-12-06 19:26:34.218268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.218343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.218608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.218700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.218972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.219038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.219340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.219407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.219702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.219771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 
00:28:23.754 [2024-12-06 19:26:34.220014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.220080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.220325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.220390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.220690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.220756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.221053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.221121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.221384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.221449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 
00:28:23.754 [2024-12-06 19:26:34.221649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.221767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.222069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.222133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.754 [2024-12-06 19:26:34.222360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.754 [2024-12-06 19:26:34.222424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.754 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.222728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.222800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.223095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.223159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 
00:28:23.755 [2024-12-06 19:26:34.223408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.223490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.223720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.223789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.224027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.224092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.224386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.224453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.224700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.224766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 
00:28:23.755 [2024-12-06 19:26:34.225070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.225140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.225429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.225494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.225788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.225855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.226177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.226244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.226502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.226566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 
00:28:23.755 [2024-12-06 19:26:34.226834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.226910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.227222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.227287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.227529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.227609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.227943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.228009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.228229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.228300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 
00:28:23.755 [2024-12-06 19:26:34.228586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.228653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.228889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.228956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.229202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.229278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.229537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.229602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.229898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.229967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 
00:28:23.755 [2024-12-06 19:26:34.230260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.230338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.230557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.230625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.230905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.230972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.231253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.231318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.231574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.231641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 
00:28:23.755 [2024-12-06 19:26:34.231992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.232062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.232370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.232434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.232703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.232790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.233091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.233157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.233407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.233471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 
00:28:23.755 [2024-12-06 19:26:34.233814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.233883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.234103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.234171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.234468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.234541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.234806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.234872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.755 [2024-12-06 19:26:34.235141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.235205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 
00:28:23.755 [2024-12-06 19:26:34.235429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.755 [2024-12-06 19:26:34.235497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.755 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.235715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.235781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.235987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.236052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.236354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.236422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.236714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.236781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 
00:28:23.756 [2024-12-06 19:26:34.237038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.237114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.237367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.237432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.237696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.237778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.238005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.238071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.238359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.238424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 
00:28:23.756 [2024-12-06 19:26:34.238623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.238707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.238999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.239066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.239322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.239388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.239605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.239691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.239909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.239977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 
00:28:23.756 [2024-12-06 19:26:34.240282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.240363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.240638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.240745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.241032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.241096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.241409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.241477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.241774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.241842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 
00:28:23.756 [2024-12-06 19:26:34.242134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.242201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.242452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.242517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.242743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.242811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.243019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.243086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.243374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.243439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 
00:28:23.756 [2024-12-06 19:26:34.243655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.243764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.243998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.244065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.244312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.244376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.244589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.244657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.244963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.245029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 
00:28:23.756 [2024-12-06 19:26:34.245329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.245398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.245636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.245741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.245996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.246060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.246332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.246399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.246626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.246719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 
00:28:23.756 [2024-12-06 19:26:34.247026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.247099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.247352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.247417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.247619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.247709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.756 [2024-12-06 19:26:34.247999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-12-06 19:26:34.248066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.756 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.248286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.248350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 
00:28:23.757 [2024-12-06 19:26:34.248636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.248730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.249014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.249079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.249325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.249388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.249661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.249759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.249999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.250065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 
00:28:23.757 [2024-12-06 19:26:34.250356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.250426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.250707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.250774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.250985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.251052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.251323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.251391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.251629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.251713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 
00:28:23.757 [2024-12-06 19:26:34.252004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.252083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.252375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.252439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.252745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.252822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.253100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.253165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.253406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.253473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 
00:28:23.757 [2024-12-06 19:26:34.253744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.253823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.254114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.254180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.254466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.254534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.254829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.254897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.255141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.255215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 
00:28:23.757 [2024-12-06 19:26:34.255461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.255528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.255822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.255889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.256135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.256201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.256445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.256511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.256733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.256802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 
00:28:23.757 [2024-12-06 19:26:34.257055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.257134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.257401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.257466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.257722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.257795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.258091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.258155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.258419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.258491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 
00:28:23.757 [2024-12-06 19:26:34.258810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.258878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.259124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.259189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.259501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.259568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.259877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.259943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.260176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.260254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 
00:28:23.757 [2024-12-06 19:26:34.260504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.260569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.757 [2024-12-06 19:26:34.260838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.757 [2024-12-06 19:26:34.260919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.757 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.261134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.261203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.261439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.261503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.261795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.261872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 
00:28:23.758 [2024-12-06 19:26:34.262130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.262196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.262502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.262577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.262910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.262977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.263227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.263304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.263547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.263612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 
00:28:23.758 [2024-12-06 19:26:34.263885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.263951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.264194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.264262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.264514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.264577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.264849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.264916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 00:28:23.758 [2024-12-06 19:26:34.265171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.758 [2024-12-06 19:26:34.265239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.758 qpair failed and we were unable to recover it. 
00:28:23.761 [2024-12-06 19:26:34.300015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.761 [2024-12-06 19:26:34.300083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.761 qpair failed and we were unable to recover it. 00:28:23.761 [2024-12-06 19:26:34.300346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.761 [2024-12-06 19:26:34.300412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:23.761 qpair failed and we were unable to recover it. 00:28:24.040 [2024-12-06 19:26:34.300706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.040 [2024-12-06 19:26:34.300779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.040 qpair failed and we were unable to recover it. 00:28:24.040 [2024-12-06 19:26:34.301032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.040 [2024-12-06 19:26:34.301098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.040 qpair failed and we were unable to recover it. 00:28:24.040 [2024-12-06 19:26:34.301319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.040 [2024-12-06 19:26:34.301404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.040 qpair failed and we were unable to recover it. 
00:28:24.040 [2024-12-06 19:26:34.301636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.040 [2024-12-06 19:26:34.301719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.040 qpair failed and we were unable to recover it. 00:28:24.040 [2024-12-06 19:26:34.302025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.302089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.302326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.302394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.302576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.302642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.302952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.303028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 
00:28:24.041 [2024-12-06 19:26:34.303305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.303373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.303619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.303700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.303981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.304048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.304346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.304411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.304635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.304719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 
00:28:24.041 [2024-12-06 19:26:34.304991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.305060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.305312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.305390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.305731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.305799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.306056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.306121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.306406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.306474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 
00:28:24.041 [2024-12-06 19:26:34.306727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.306795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.307041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.307113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.307347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.307413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.307679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.307758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.308055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.308135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 
00:28:24.041 [2024-12-06 19:26:34.308353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.308418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.308662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.308751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.309015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.309082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.309309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.309375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.309622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.309717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 
00:28:24.041 [2024-12-06 19:26:34.309945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.310022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.310278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.310343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.310587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.310655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.310974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.311041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.311311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.311377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 
00:28:24.041 [2024-12-06 19:26:34.311646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.311739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.312033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.312107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.312385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.312450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.312634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-06 19:26:34.312722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-06 19:26:34.312989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.313058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-06 19:26:34.313262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.313327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.313543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.313610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.313877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.313945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.314198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.314263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.314516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.314584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-06 19:26:34.314906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.314971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.315218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.315297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.315615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.315703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.315934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.315998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.316249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.316316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-06 19:26:34.316568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.316633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.316863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.316937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.317216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.317283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.317470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.317537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.317826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.317909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-06 19:26:34.318179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.318245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.318536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.318614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.318939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.319035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.319300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.319370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.319606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.319696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-06 19:26:34.319961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.320030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.320297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.320387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.320638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.320756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.321050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.321140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.321475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.321565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-06 19:26:34.321915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.322005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.322359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.322448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.322768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.322861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.323219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.323290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.323580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.323645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-06 19:26:34.323940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.324019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.324292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.324358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.324540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.324625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.325012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.325100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-06 19:26:34.325457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-06 19:26:34.325546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.043 [2024-12-06 19:26:34.325932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-06 19:26:34.326024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-06 19:26:34.326341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-06 19:26:34.326433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-06 19:26:34.326792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-06 19:26:34.326884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-06 19:26:34.327207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-06 19:26:34.327278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-06 19:26:34.327503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-06 19:26:34.327571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 
00:28:24.043 [2024-12-06 19:26:34.327881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-06 19:26:34.327947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it.
00:28:24.046 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeated for tqpair=0x7f82c8000b90, addr=10.0.0.2, port=4420 through 2024-12-06 19:26:34.371093 ...]
00:28:24.046 [2024-12-06 19:26:34.371338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.371405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 00:28:24.046 [2024-12-06 19:26:34.371682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.371750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 00:28:24.046 [2024-12-06 19:26:34.372045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.372109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 00:28:24.046 [2024-12-06 19:26:34.372366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.372431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 00:28:24.046 [2024-12-06 19:26:34.372623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.372707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 
00:28:24.046 [2024-12-06 19:26:34.372925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.372990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 00:28:24.046 [2024-12-06 19:26:34.373244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.373309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 00:28:24.046 [2024-12-06 19:26:34.373557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.373622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 00:28:24.046 [2024-12-06 19:26:34.373950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.374015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 00:28:24.046 [2024-12-06 19:26:34.374312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.374377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 
00:28:24.046 [2024-12-06 19:26:34.374635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.046 [2024-12-06 19:26:34.374723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.046 qpair failed and we were unable to recover it. 00:28:24.046 [2024-12-06 19:26:34.374973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.375039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.375329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.375394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.375697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.375764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.376071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.376135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-06 19:26:34.376383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.376448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.376735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.376802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.377092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.377156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.377406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.377472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.377762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.377828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-06 19:26:34.378082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.378147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.378397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.378463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.378709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.378787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.379084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.379149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.379388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.379453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-06 19:26:34.379700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.379768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.379997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.380062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.380348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.380412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.380654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.380739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.380983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.381047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-06 19:26:34.381286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.381350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.381552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.381617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.381928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.381993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.382198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.382263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.382526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.382592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-06 19:26:34.382862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.382930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.383236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.383302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.383552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.383617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.383847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.383915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.384148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.384213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-06 19:26:34.384471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.384537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.384832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.384900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.385096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.385160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.385408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.385472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-06 19:26:34.385717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.385784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-06 19:26:34.386035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-06 19:26:34.386101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.386336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.386401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.386600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.386682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.386933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.386998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.387267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.387335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-06 19:26:34.387643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.387724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.387977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.388044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.388288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.388353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.388651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.388736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.389054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.389119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-06 19:26:34.389415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.389480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.389726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.389792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.390054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.390119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.390383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.390449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.390707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.390774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-06 19:26:34.391034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.391100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.391301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.391367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.391623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.391714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.391969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.392037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.392258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.392322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-06 19:26:34.392565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.392631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.392899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.392965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.393255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.393319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.393618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.393720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.394011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.394076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-06 19:26:34.394329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.394393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.394700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.394767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.394979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.395043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.395325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.395389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.395697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.395764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-06 19:26:34.395973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.396037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.396343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.396407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.396617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.396698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.396994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.397058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-06 19:26:34.397355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-06 19:26:34.397420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-06 19:26:34.397721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.048 [2024-12-06 19:26:34.397787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.048 qpair failed and we were unable to recover it.
00:28:24.048 [2024-12-06 19:26:34.398041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.048 [2024-12-06 19:26:34.398107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.048 qpair failed and we were unable to recover it.
00:28:24.048 [2024-12-06 19:26:34.398406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.049 [2024-12-06 19:26:34.398472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.049 qpair failed and we were unable to recover it.
00:28:24.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1231865 Killed "${NVMF_APP[@]}" "$@"
00:28:24.049 [2024-12-06 19:26:34.398723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.049 [2024-12-06 19:26:34.398789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.049 qpair failed and we were unable to recover it.
00:28:24.049 [2024-12-06 19:26:34.399049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.049 [2024-12-06 19:26:34.399114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.049 qpair failed and we were unable to recover it.
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1232414
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1232414
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1232414 ']'
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:24.049 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:24.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:24.050 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:24.050 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:24.052 [2024-12-06 19:26:34.429656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.429736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.429895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.429938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.430072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.430103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.430231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.430260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.430375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.430404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 
00:28:24.052 [2024-12-06 19:26:34.430531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.430559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.430656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.430701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.430798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.430826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.430980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.431023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.431122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.431200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 
00:28:24.052 [2024-12-06 19:26:34.431428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.431492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.431742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.431772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.431901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.431929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.432084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.432119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.432329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.432392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 
00:28:24.052 [2024-12-06 19:26:34.432713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.432742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.432842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.432870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.433005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.433036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.433174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.433246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.433443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.433513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 
00:28:24.052 [2024-12-06 19:26:34.433786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.433814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.433938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.433968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.434124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.434152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.434256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.434284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.434493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.434559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 
00:28:24.052 [2024-12-06 19:26:34.434768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.434796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.052 [2024-12-06 19:26:34.434916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.052 [2024-12-06 19:26:34.434948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.052 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.435063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.435094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.435177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.435205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.435333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.435361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 
00:28:24.053 [2024-12-06 19:26:34.435503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.435569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.435796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.435827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.435924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.435953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.436090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.436135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.436299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.436360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 
00:28:24.053 [2024-12-06 19:26:34.436527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.436581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.436696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.436728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.436860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.436889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.437044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.437099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.437256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.437293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 
00:28:24.053 [2024-12-06 19:26:34.437536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.437600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.437789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.437819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.437969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.438046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.438330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.438393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.438634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.438719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 
00:28:24.053 [2024-12-06 19:26:34.438818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.438846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.438980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.439013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.439137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.439166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.439364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.439393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.439712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.439741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 
00:28:24.053 [2024-12-06 19:26:34.439866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.439895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.440024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.440063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.440265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.440329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.440523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.440551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.440690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.440720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 
00:28:24.053 [2024-12-06 19:26:34.440822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.440853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.440954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.440983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.441097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.441171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.441373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.441448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.441700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.441750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 
00:28:24.053 [2024-12-06 19:26:34.441850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.441880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.441991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.053 [2024-12-06 19:26:34.442019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.053 qpair failed and we were unable to recover it. 00:28:24.053 [2024-12-06 19:26:34.442133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.442167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.442363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.442427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.442626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.442713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 
00:28:24.054 [2024-12-06 19:26:34.442815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.442843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.442968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.442997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.443092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.443175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.443358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.443420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.443615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.443644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 
00:28:24.054 [2024-12-06 19:26:34.443752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.443779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.443897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.443925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.444087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.444115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.444242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.444298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.444444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.444478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 
00:28:24.054 [2024-12-06 19:26:34.444605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.444634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.444744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.444772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.444901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.444929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.445116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.445182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.445410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.445479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 
00:28:24.054 [2024-12-06 19:26:34.445733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.445762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.445900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.445928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.446021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.446049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.446334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.446398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.446576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.446604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 
00:28:24.054 [2024-12-06 19:26:34.446758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.446787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.446889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.446916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.447106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.447141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.447247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.447281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.447532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.447590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 
00:28:24.054 [2024-12-06 19:26:34.447803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.447846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.447982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.448022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.448172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.448221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.448362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.448410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.448526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.448554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 
00:28:24.054 [2024-12-06 19:26:34.448682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.054 [2024-12-06 19:26:34.448711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.054 qpair failed and we were unable to recover it. 00:28:24.054 [2024-12-06 19:26:34.448860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.448889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.449013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.449042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.449159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.449188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.449314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.449344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 
00:28:24.055 [2024-12-06 19:26:34.449449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.449478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.449572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.449602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.449700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.449730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.449853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.449882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.450005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.450034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 
00:28:24.055 [2024-12-06 19:26:34.450159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.450187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.450302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.450331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.450436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.450464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.450555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.450585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.450682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.450712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 
00:28:24.055 [2024-12-06 19:26:34.450798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.450826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.450904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.450933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.451080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.451110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.451202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.451236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.451336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.451366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 
00:28:24.055 [2024-12-06 19:26:34.451465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.451495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.451644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.451681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.451781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.451810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.451955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.451983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.452159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.452211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 
00:28:24.055 [2024-12-06 19:26:34.452368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.452397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.452495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.452524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.452684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.452713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.452865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.452912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.453121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.453173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 
00:28:24.055 [2024-12-06 19:26:34.453324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.453353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.055 qpair failed and we were unable to recover it. 00:28:24.055 [2024-12-06 19:26:34.453473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.055 [2024-12-06 19:26:34.453502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.453619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.453681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.453832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.453880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.454014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.454065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 
00:28:24.056 [2024-12-06 19:26:34.454151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.454180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.454327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.454356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.454503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.454532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.454661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.454701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.454872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.454927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 
00:28:24.056 [2024-12-06 19:26:34.455045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.455105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.455252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.455281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.455398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.455440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.455530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.455560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.455711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.455775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 
00:28:24.056 [2024-12-06 19:26:34.455989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.456017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.456144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.456173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.456297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.456324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.456452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.456480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.456607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.456635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 
00:28:24.056 [2024-12-06 19:26:34.456793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.456821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.456994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.457049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.457223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.457277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.457492] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:28:24.056 [2024-12-06 19:26:34.457523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.457574] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.056 [2024-12-06 19:26:34.457578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 
00:28:24.056 [2024-12-06 19:26:34.457744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.457772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.457862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.457889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.458073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.458124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.458372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.458434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.458617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.458680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 
00:28:24.056 [2024-12-06 19:26:34.458830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.458857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.458963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.458991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.459109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.459138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.459350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.459384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.459588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.459633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 
00:28:24.056 [2024-12-06 19:26:34.459745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.056 [2024-12-06 19:26:34.459774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.056 qpair failed and we were unable to recover it. 00:28:24.056 [2024-12-06 19:26:34.459926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.057 [2024-12-06 19:26:34.459978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.057 qpair failed and we were unable to recover it. 00:28:24.057 [2024-12-06 19:26:34.460222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.057 [2024-12-06 19:26:34.460250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.057 qpair failed and we were unable to recover it. 00:28:24.057 [2024-12-06 19:26:34.460492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.057 [2024-12-06 19:26:34.460557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.057 qpair failed and we were unable to recover it. 00:28:24.057 [2024-12-06 19:26:34.460797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.057 [2024-12-06 19:26:34.460826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.057 qpair failed and we were unable to recover it. 
00:28:24.057 [2024-12-06 19:26:34.460951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.057 [2024-12-06 19:26:34.460982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.057 qpair failed and we were unable to recover it. 00:28:24.057 [2024-12-06 19:26:34.461072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.057 [2024-12-06 19:26:34.461101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.057 qpair failed and we were unable to recover it. 00:28:24.057 [2024-12-06 19:26:34.461359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.057 [2024-12-06 19:26:34.461413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.057 qpair failed and we were unable to recover it. 00:28:24.057 [2024-12-06 19:26:34.461564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.057 [2024-12-06 19:26:34.461592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.057 qpair failed and we were unable to recover it. 00:28:24.057 [2024-12-06 19:26:34.461691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.057 [2024-12-06 19:26:34.461720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.057 qpair failed and we were unable to recover it. 
00:28:24.057 [2024-12-06 19:26:34.461815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.057 [2024-12-06 19:26:34.461843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.057 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple repeats roughly 110 more times between 19:26:34.461 and 19:26:34.485, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420 — mostly for tqpair=0x6bcfa0, with a short run for tqpair=0x7f82cc000b90, each ending "qpair failed and we were unable to recover it." ...]
00:28:24.060 [2024-12-06 19:26:34.485249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.485298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 00:28:24.060 [2024-12-06 19:26:34.485500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.485536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 00:28:24.060 [2024-12-06 19:26:34.485683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.485730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 00:28:24.060 [2024-12-06 19:26:34.485848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.485876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 00:28:24.060 [2024-12-06 19:26:34.486003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.486070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 
00:28:24.060 [2024-12-06 19:26:34.486350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.486401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 00:28:24.060 [2024-12-06 19:26:34.486550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.486578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 00:28:24.060 [2024-12-06 19:26:34.486714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.486742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 00:28:24.060 [2024-12-06 19:26:34.486830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.486858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 00:28:24.060 [2024-12-06 19:26:34.486983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.487012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.060 qpair failed and we were unable to recover it. 
00:28:24.060 [2024-12-06 19:26:34.487108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.060 [2024-12-06 19:26:34.487136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.487298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.487347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.487551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.487579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.487677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.487705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.487821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.487849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 
00:28:24.061 [2024-12-06 19:26:34.487973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.488007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.488116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.488150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.488272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.488323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.488572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.488623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.488840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.488890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 
00:28:24.061 [2024-12-06 19:26:34.489097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.489148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.489349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.489383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.489496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.489530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.489737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.489773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.489888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.489923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 
00:28:24.061 [2024-12-06 19:26:34.490031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.490065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.490247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.490298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.490539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.490588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.490835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.490886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.491095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.491145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 
00:28:24.061 [2024-12-06 19:26:34.491299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.491347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.491514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.491572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.491739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.491790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.491954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.492004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.492198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.492248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 
00:28:24.061 [2024-12-06 19:26:34.492456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.492506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.492655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.492715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.492902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.492952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.493183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.493234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.493390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.493440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 
00:28:24.061 [2024-12-06 19:26:34.493652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.493716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.493916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.493967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.494173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.494223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.494421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.494471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.061 qpair failed and we were unable to recover it. 00:28:24.061 [2024-12-06 19:26:34.494690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.061 [2024-12-06 19:26:34.494743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 
00:28:24.062 [2024-12-06 19:26:34.494963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.495013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.495206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.495256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.495464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.495515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.495708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.495744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.495881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.495914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 
00:28:24.062 [2024-12-06 19:26:34.496014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.496041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.496133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.496158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.496256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.496281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.496392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.496417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.496529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.496554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 
00:28:24.062 [2024-12-06 19:26:34.496670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.496696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.496786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.496811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.496896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.496921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.497040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.497065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.497183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.497208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 
00:28:24.062 [2024-12-06 19:26:34.497290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.497317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.497459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.497484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.497563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.497587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.497684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.497710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.497851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.497876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 
00:28:24.062 [2024-12-06 19:26:34.497949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.497974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.498082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.498106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.498181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.498206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.498296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.498320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.498426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.498451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 
00:28:24.062 [2024-12-06 19:26:34.498569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.498594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.498689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.498714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.498798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.498828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.498944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.498970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.499087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.499112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 
00:28:24.062 [2024-12-06 19:26:34.499246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.499271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.499379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.499404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.499517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.499542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.499633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.499658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.062 qpair failed and we were unable to recover it. 00:28:24.062 [2024-12-06 19:26:34.499746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.062 [2024-12-06 19:26:34.499771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.063 qpair failed and we were unable to recover it. 
00:28:24.066 [2024-12-06 19:26:34.513867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.513892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.514007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.514031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.514107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.514131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.514241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.514265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.514356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.514380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 
00:28:24.066 [2024-12-06 19:26:34.514497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.514521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.514606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.514630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.514717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.514741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.514818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.514842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.514957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.514982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 
00:28:24.066 [2024-12-06 19:26:34.515105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.515130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.515202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.515227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.515336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.515361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.515437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.515462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.515552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.515576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 
00:28:24.066 [2024-12-06 19:26:34.515661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.515702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.515816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.515840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.515919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.515943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.516061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.516089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.516171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.516195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 
00:28:24.066 [2024-12-06 19:26:34.516271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.516295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.516411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.516436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.516521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.516546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.516624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.516649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.516731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.516757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 
00:28:24.066 [2024-12-06 19:26:34.516860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.066 [2024-12-06 19:26:34.516885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.066 qpair failed and we were unable to recover it. 00:28:24.066 [2024-12-06 19:26:34.516986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.517011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.517125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.517150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.517287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.517312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.517451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.517476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 
00:28:24.067 [2024-12-06 19:26:34.517617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.517643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.517735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.517759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.517877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.517901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.517984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.518009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.518149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.518173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 
00:28:24.067 [2024-12-06 19:26:34.518282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.518306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.518400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.518425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.518565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.518590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.518671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.518705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.518801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.518828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 
00:28:24.067 [2024-12-06 19:26:34.518913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.518937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.519060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.519084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.519164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.519189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.519270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.519294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.519407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.519431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 
00:28:24.067 [2024-12-06 19:26:34.519523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.519547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.519634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.519658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.519799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.519823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.519947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.519972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.520053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.520078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 
00:28:24.067 [2024-12-06 19:26:34.520183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.520208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.520320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.520345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.520433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.520457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.520559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.520583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.520702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.520728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 
00:28:24.067 [2024-12-06 19:26:34.520838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.520862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.520974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.520998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.521074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.521099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.521182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.521206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.521309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.521349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 
00:28:24.067 [2024-12-06 19:26:34.521468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.521496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.067 qpair failed and we were unable to recover it. 00:28:24.067 [2024-12-06 19:26:34.521585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.067 [2024-12-06 19:26:34.521612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.521732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.521760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.521844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.521871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.521985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.522012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 
00:28:24.068 [2024-12-06 19:26:34.522154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.522180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.522326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.522351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.522489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.522513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.522594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.522618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.522737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.522762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 
00:28:24.068 [2024-12-06 19:26:34.522874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.522899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.522976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.523001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.523116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.523141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.523261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.523285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 00:28:24.068 [2024-12-06 19:26:34.523358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.068 [2024-12-06 19:26:34.523383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.068 qpair failed and we were unable to recover it. 
00:28:24.068 [2024-12-06 19:26:34.523492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.068 [2024-12-06 19:26:34.523516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.068 qpair failed and we were unable to recover it.
00:28:24.068 [... the connect()/qpair-failure record above repeats from 19:26:34.523623 through 19:26:34.538577, alternating between tqpair=0x6bcfa0 and tqpair=0x7f82cc000b90, always with errno = 111, addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." ...]
00:28:24.071 [2024-12-06 19:26:34.535983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:24.071 [2024-12-06 19:26:34.538671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.071 [2024-12-06 19:26:34.538697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.071 qpair failed and we were unable to recover it. 00:28:24.071 [2024-12-06 19:26:34.538803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.071 [2024-12-06 19:26:34.538827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.071 qpair failed and we were unable to recover it. 00:28:24.071 [2024-12-06 19:26:34.538967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.071 [2024-12-06 19:26:34.538992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.071 qpair failed and we were unable to recover it. 00:28:24.071 [2024-12-06 19:26:34.539134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.071 [2024-12-06 19:26:34.539158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.071 qpair failed and we were unable to recover it. 00:28:24.071 [2024-12-06 19:26:34.539273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.071 [2024-12-06 19:26:34.539298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 
00:28:24.072 [2024-12-06 19:26:34.539416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.539441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.539528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.539552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.539691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.539717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.539839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.539864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.539949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.539974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 
00:28:24.072 [2024-12-06 19:26:34.540086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.540110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.540201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.540226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.540315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.540340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.540450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.540475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.540568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.540593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 
00:28:24.072 [2024-12-06 19:26:34.540681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.540707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.540824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.540850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.540963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.540994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.541106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.541131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.541221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.541247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 
00:28:24.072 [2024-12-06 19:26:34.541368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.541393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.541500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.541526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.541618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.541658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.541807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.541837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.541919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.541946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 
00:28:24.072 [2024-12-06 19:26:34.542041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.542068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.542214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.542241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.542356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.542382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.542503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.542530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.542626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.542653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 
00:28:24.072 [2024-12-06 19:26:34.542837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.542877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.542978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.543005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.543098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.543124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.543230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.543255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.543336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.543361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 
00:28:24.072 [2024-12-06 19:26:34.543450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.543476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.543578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.543603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.543691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.543716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.543797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.072 [2024-12-06 19:26:34.543821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.072 qpair failed and we were unable to recover it. 00:28:24.072 [2024-12-06 19:26:34.543960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.543985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 
00:28:24.073 [2024-12-06 19:26:34.544097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.544122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.544267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.544293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.544384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.544412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.544501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.544532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.544623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.544656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 
00:28:24.073 [2024-12-06 19:26:34.544797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.544824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.544942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.544970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.545060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.545088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.545206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.545234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.545316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.545342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 
00:28:24.073 [2024-12-06 19:26:34.545441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.545467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.545590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.545615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.545742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.545768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.545862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.545887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.546007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.546034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 
00:28:24.073 [2024-12-06 19:26:34.546115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.546141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.546262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.546289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.546433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.546460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.546594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.546633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.546746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.546775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 
00:28:24.073 [2024-12-06 19:26:34.546918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.546945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.547059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.547085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.547201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.547227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.547349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.547376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.547455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.547481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 
00:28:24.073 [2024-12-06 19:26:34.547561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.547587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.547700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.547726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.547836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.547861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.547982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.548007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.548135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.548160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 
00:28:24.073 [2024-12-06 19:26:34.548245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.548270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.548357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.548387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.548465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.548490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.548596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.548621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.073 qpair failed and we were unable to recover it. 00:28:24.073 [2024-12-06 19:26:34.548721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.073 [2024-12-06 19:26:34.548747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 
00:28:24.074 [2024-12-06 19:26:34.548829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.548854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.548926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.548951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.549063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.549088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.549195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.549220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.549353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.549379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 
00:28:24.074 [2024-12-06 19:26:34.549485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.549511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.549661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.549695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.549795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.549834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.549980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.550008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.550090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.550117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 
00:28:24.074 [2024-12-06 19:26:34.550236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.550262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.550381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.550408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.550506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.550533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.550650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.550691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.550779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.550805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 
00:28:24.074 [2024-12-06 19:26:34.550922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.550948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.551034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.551061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.551174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.551200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.551316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.551342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.551487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.551514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 
00:28:24.074 [2024-12-06 19:26:34.551607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.551633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.551723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.551749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.551833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.551858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.551942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.551971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.552056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.552080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 
00:28:24.074 [2024-12-06 19:26:34.552194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.552219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.552299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.552324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.552405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.552431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.552519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.552546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.552670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.552697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 
00:28:24.074 [2024-12-06 19:26:34.552804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.552830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.552948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.552974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.553088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.553115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.553203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.553232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 00:28:24.074 [2024-12-06 19:26:34.553348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.074 [2024-12-06 19:26:34.553375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.074 qpair failed and we were unable to recover it. 
00:28:24.074 [2024-12-06 19:26:34.553521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.553548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.553662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.553698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.553799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.553826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.553941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.553967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.554048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.554076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 
00:28:24.075 [2024-12-06 19:26:34.554164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.554190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.554305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.554332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.554442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.554468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.554553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.554578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.554673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.554701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 
00:28:24.075 [2024-12-06 19:26:34.554812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.554838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.554946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.554972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.555088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.555114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.555201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.555227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.555301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.555326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 
00:28:24.075 [2024-12-06 19:26:34.555412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.555438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.555576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.555602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.555729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.555768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.555865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.555893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.555981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.556007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 
00:28:24.075 [2024-12-06 19:26:34.556088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.556114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.556223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.556249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.556332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.556360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.556441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.556467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.556609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.556635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 
00:28:24.075 [2024-12-06 19:26:34.556730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.556756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.556846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.556872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.556979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.557005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.557084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.557111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.557212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.557238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 
00:28:24.075 [2024-12-06 19:26:34.557353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.557378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.557467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.075 [2024-12-06 19:26:34.557492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.075 qpair failed and we were unable to recover it. 00:28:24.075 [2024-12-06 19:26:34.557604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.557628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.557762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.557789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.557867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.557891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 
00:28:24.076 [2024-12-06 19:26:34.557977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.558002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.558086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.558111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.558217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.558244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.558332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.558357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.558450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.558477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 
00:28:24.076 [2024-12-06 19:26:34.558593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.558620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.558756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.558796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.558894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.558923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.559079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.559105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.559219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.559243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 
00:28:24.076 [2024-12-06 19:26:34.559392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.559418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.559508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.559532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.559643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.559679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.559761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.559787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.559897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.559921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 
00:28:24.076 [2024-12-06 19:26:34.560009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.560037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.560131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.560158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.560270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.560295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.560407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.560434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.560546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.560572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 
00:28:24.076 [2024-12-06 19:26:34.560682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.560709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.560825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.560851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.560970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.560997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.561108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.561134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.561215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.561242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 
00:28:24.076 [2024-12-06 19:26:34.561354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.561379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.561491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.561516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.561624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.561649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.561774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.561800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 00:28:24.076 [2024-12-06 19:26:34.561881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.076 [2024-12-06 19:26:34.561906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.076 qpair failed and we were unable to recover it. 
00:28:24.077 [2024-12-06 19:26:34.562017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.562041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.562153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.562178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.562284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.562309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.562393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.562417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.562499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.562525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 
00:28:24.077 [2024-12-06 19:26:34.562602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.562627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.562723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.562748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.562870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.562896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.563003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.563028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.563110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.563135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 
00:28:24.077 [2024-12-06 19:26:34.563225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.563250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.563367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.563393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.563482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.563510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.563627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.563654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.563811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.563837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 
00:28:24.077 [2024-12-06 19:26:34.563953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.563979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.564070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.564095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.564209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.564235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.564385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.564411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.564527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.564552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 
00:28:24.077 [2024-12-06 19:26:34.564636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.564660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.564758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.564782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.564875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.564900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.565016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.565040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.565149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.565173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 
00:28:24.077 [2024-12-06 19:26:34.565286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.565311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.565394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.565419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.565498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.565526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.565616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.565642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.565739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.565765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 
00:28:24.077 [2024-12-06 19:26:34.565901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.565929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.566044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.566070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.566159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.566185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.566306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.566332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 00:28:24.077 [2024-12-06 19:26:34.566448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.077 [2024-12-06 19:26:34.566473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.077 qpair failed and we were unable to recover it. 
00:28:24.077 [2024-12-06 19:26:34.566558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.566582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.566692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.566717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.566806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.566831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.566940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.566965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.567112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.567137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 
00:28:24.078 [2024-12-06 19:26:34.567220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.567245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.567326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.567351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.567432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.567459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.567552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.567578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.567694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.567720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 
00:28:24.078 [2024-12-06 19:26:34.567833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.567860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.567976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.568003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.568116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.568143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.568262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.568292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.568407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.568432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 
00:28:24.078 [2024-12-06 19:26:34.568549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.568575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.568685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.568711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.568796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.568820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.568900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.568925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.569066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.569091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 
00:28:24.078 [2024-12-06 19:26:34.569202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.569228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.569343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.569368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.569480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.569507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.569593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.569619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.569736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.569762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 
00:28:24.078 [2024-12-06 19:26:34.569848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.569875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.569966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.569992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.570103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.570129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.570218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.570244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.570360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.570386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 
00:28:24.078 [2024-12-06 19:26:34.570495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.570520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.570660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.570693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.570815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.570841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.570918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.570942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 00:28:24.078 [2024-12-06 19:26:34.571021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.078 [2024-12-06 19:26:34.571047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.078 qpair failed and we were unable to recover it. 
00:28:24.078 [2024-12-06 19:26:34.571128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.571153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.571290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.571316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.571401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.571425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.571509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.571534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.571614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.571638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 
00:28:24.079 [2024-12-06 19:26:34.571727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.571754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.571842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.571868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.571959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.571985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.572103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.572129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.572272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.572298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 
00:28:24.079 [2024-12-06 19:26:34.572377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.572404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.572518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.572544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.572670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.572698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.572847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.572873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.572972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.573001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 
00:28:24.079 [2024-12-06 19:26:34.573126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.573152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.573260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.573284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.573369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.573393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.573492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.573518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.573631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.573656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 
00:28:24.079 [2024-12-06 19:26:34.573816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.573840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.573954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.573979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.574078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.574102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.574178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.574205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 00:28:24.079 [2024-12-06 19:26:34.574319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.079 [2024-12-06 19:26:34.574345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.079 qpair failed and we were unable to recover it. 
00:28:24.080 [2024-12-06 19:26:34.576510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.080 [2024-12-06 19:26:34.576549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.080 qpair failed and we were unable to recover it.
00:28:24.083 [2024-12-06 19:26:34.589466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.589493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.589582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.589608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.589713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.589752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.589843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.589869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.590025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.590053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 
00:28:24.083 [2024-12-06 19:26:34.590167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.590192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.590310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.590337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.590423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.590451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.590538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.590565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.590653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.590687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 
00:28:24.083 [2024-12-06 19:26:34.590776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.590802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.590880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.590906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.590996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.591023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.591137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.591163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.591245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.591277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 
00:28:24.083 [2024-12-06 19:26:34.591358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.591385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.591498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.591524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.591648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.591699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.591827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.591853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.591943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.591980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 
00:28:24.083 [2024-12-06 19:26:34.592072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.592098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.592217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.592244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.592354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.592380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.592493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.592520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.592623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.592651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 
00:28:24.083 [2024-12-06 19:26:34.592780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.592806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.592921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.592959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.593082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.593109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.593226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.593253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.593368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.593395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 
00:28:24.083 [2024-12-06 19:26:34.593512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.593538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.593623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.593658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.593803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.083 [2024-12-06 19:26:34.593829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.083 qpair failed and we were unable to recover it. 00:28:24.083 [2024-12-06 19:26:34.593949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.593985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.594097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.594123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 
00:28:24.084 [2024-12-06 19:26:34.594203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.594227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.594316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.594341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.594484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.594509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.594601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.594628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.594764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.594790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 
00:28:24.084 [2024-12-06 19:26:34.594876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.594903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.595025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.595052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.595137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.595164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.595273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.595299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.595419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.595446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 
00:28:24.084 [2024-12-06 19:26:34.595540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.595566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.084 [2024-12-06 19:26:34.595653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.084 [2024-12-06 19:26:34.595686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.084 qpair failed and we were unable to recover it. 00:28:24.368 [2024-12-06 19:26:34.595780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.368 [2024-12-06 19:26:34.595806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.368 qpair failed and we were unable to recover it. 00:28:24.368 [2024-12-06 19:26:34.595894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.368 [2024-12-06 19:26:34.595921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.368 qpair failed and we were unable to recover it. 00:28:24.368 [2024-12-06 19:26:34.596044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.368 [2024-12-06 19:26:34.596083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.368 qpair failed and we were unable to recover it. 
00:28:24.368 [2024-12-06 19:26:34.596171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.368 [2024-12-06 19:26:34.596197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.368 qpair failed and we were unable to recover it. 00:28:24.368 [2024-12-06 19:26:34.596318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.368 [2024-12-06 19:26:34.596346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.368 qpair failed and we were unable to recover it. 00:28:24.368 [2024-12-06 19:26:34.596467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.368 [2024-12-06 19:26:34.596492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.368 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.596570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.596595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.596711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.596743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 
00:28:24.369 [2024-12-06 19:26:34.596819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.596844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.596925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.596949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.597033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.597059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.597148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.597175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.597260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.597286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 
00:28:24.369 [2024-12-06 19:26:34.597363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.597389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.597472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.597498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.597578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.597603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.597693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.597719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.597806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.597833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.597874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:24.369 [2024-12-06 19:26:34.597914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.369 [2024-12-06 19:26:34.597934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.369 [2024-12-06 19:26:34.597947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.369 [2024-12-06 19:26:34.597963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.369 [2024-12-06 19:26:34.597914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.597939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.598070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.598096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.598207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.598233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.598320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.598346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 
00:28:24.369 [2024-12-06 19:26:34.598424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.598451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.598539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.598568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.598703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.598731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.598838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.598865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.598980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.599016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 
00:28:24.369 [2024-12-06 19:26:34.599135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.599162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.599304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.599330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.599414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.599441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.599529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.599554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.599644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.599686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 
00:28:24.369 [2024-12-06 19:26:34.599619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:24.369 [2024-12-06 19:26:34.599685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:24.369 [2024-12-06 19:26:34.599710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:24.369 [2024-12-06 19:26:34.599713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:24.369 [2024-12-06 19:26:34.599774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.599800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.600071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.600100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.600192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.600219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.600335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.600363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 
00:28:24.369 [2024-12-06 19:26:34.600454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.600481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.600593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.600619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.600721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.600749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.600839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.369 [2024-12-06 19:26:34.600866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.369 qpair failed and we were unable to recover it. 00:28:24.369 [2024-12-06 19:26:34.600982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.601008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 
00:28:24.370 [2024-12-06 19:26:34.601119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.601146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.601229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.601255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.601365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.601391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.601482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.601515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.601598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.601624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 
00:28:24.370 [2024-12-06 19:26:34.601712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.601738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.601848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.601875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.601989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.602016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.602105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.602131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.602264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.602293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 
00:28:24.370 [2024-12-06 19:26:34.602377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.602405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.602489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.602515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.602595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.602621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.602727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.602754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.602863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.602890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 
00:28:24.370 [2024-12-06 19:26:34.603017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.603044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.603134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.603161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.603295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.603333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.603420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.603447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.603525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.603552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 
00:28:24.370 [2024-12-06 19:26:34.603702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.603729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.603819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.603846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.603932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.603959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.604039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.604066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.604180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.604207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 
00:28:24.370 [2024-12-06 19:26:34.604320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.604347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.604429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.604456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.604582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.604621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.604740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.604768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.604850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.604875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 
00:28:24.370 [2024-12-06 19:26:34.604965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.604992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.605110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.605138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.605258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.605286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.605407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.605433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.605522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.605548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 
00:28:24.370 [2024-12-06 19:26:34.605660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.605691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.605765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.370 [2024-12-06 19:26:34.605790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.370 qpair failed and we were unable to recover it. 00:28:24.370 [2024-12-06 19:26:34.605867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.605892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.606016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.606043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.606123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.606148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 
00:28:24.371 [2024-12-06 19:26:34.606268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.606294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.606426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.606453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.606574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.606602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.606695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.606726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.606841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.606866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 
00:28:24.371 [2024-12-06 19:26:34.606944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.606977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.607091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.607117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.607204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.607229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.607313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.607338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.607420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.607445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 
00:28:24.371 [2024-12-06 19:26:34.607525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.607550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.607661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.607695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.607778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.607805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.607884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.607911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.607997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.608024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 
00:28:24.371 [2024-12-06 19:26:34.608144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.608172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.608264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.608292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.608373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.608399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.608490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.608515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.608602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.608627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 
00:28:24.371 [2024-12-06 19:26:34.608738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.608762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.608845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.608870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.608988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.609016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.609101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.609127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.609217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.609245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 
00:28:24.371 [2024-12-06 19:26:34.609327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.609353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.609476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.609516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.609639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.609685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.609759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.609785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.609898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.609923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 
00:28:24.371 [2024-12-06 19:26:34.610057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.610088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.610172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.610198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.610308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.610334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.610415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.610441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 00:28:24.371 [2024-12-06 19:26:34.610519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.371 [2024-12-06 19:26:34.610545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.371 qpair failed and we were unable to recover it. 
00:28:24.371 [2024-12-06 19:26:34.610680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.610709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.610799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.610826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.610943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.610978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.611089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.611115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.611259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.611285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 
00:28:24.372 [2024-12-06 19:26:34.611367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.611395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.611475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.611502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.611589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.611614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.611709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.611736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.611816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.611841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 
00:28:24.372 [2024-12-06 19:26:34.611931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.611956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.612035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.612060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.612142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.612167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.612279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.612304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.612378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.612402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 
00:28:24.372 [2024-12-06 19:26:34.612495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.612522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.612606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.612632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.612733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.612760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.612842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.612869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.612991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.613017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 
00:28:24.372 [2024-12-06 19:26:34.613124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.613150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.613227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.613253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.613330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.613361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.613444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.613470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.613552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.613579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 
00:28:24.372 [2024-12-06 19:26:34.613695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.613722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.613836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.613863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.613982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.614007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.614125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.614151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.614263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.614289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 
00:28:24.372 [2024-12-06 19:26:34.614403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.614428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.614511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.614537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.614646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.614678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.614767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.614794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.614881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.614907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 
00:28:24.372 [2024-12-06 19:26:34.615031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.615058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.615142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.615168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.615280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.372 [2024-12-06 19:26:34.615305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.372 qpair failed and we were unable to recover it. 00:28:24.372 [2024-12-06 19:26:34.615448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.615476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.615597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.615624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 
00:28:24.373 [2024-12-06 19:26:34.615725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.615753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.615839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.615866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.615960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.615987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.616066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.616093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.616173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.616201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 
00:28:24.373 [2024-12-06 19:26:34.616284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.616310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.616425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.616451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.616567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.616593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.616686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.616713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.616813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.616840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 
00:28:24.373 [2024-12-06 19:26:34.616961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.616987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.617067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.617093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.617175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.617201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.617311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.617337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.617449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.617475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 
00:28:24.373 [2024-12-06 19:26:34.617590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.617616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.617743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.617771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.617849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.617874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.618006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.618032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.618110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.618135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 
00:28:24.373 [2024-12-06 19:26:34.618216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.618242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.618348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.618373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.618454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.618485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.618602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.618628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.618720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.618747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 
00:28:24.373 [2024-12-06 19:26:34.618868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.618893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.618971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.618997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.619096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.619123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.619206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.619231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.619372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.619397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 
00:28:24.373 [2024-12-06 19:26:34.619491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.373 [2024-12-06 19:26:34.619518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.373 qpair failed and we were unable to recover it. 00:28:24.373 [2024-12-06 19:26:34.619628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.619653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.619745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.619771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.619849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.619874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.619957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.619982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 
00:28:24.374 [2024-12-06 19:26:34.620096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.620123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.620220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.620247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.620364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.620393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.620501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.620527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.620602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.620629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 
00:28:24.374 [2024-12-06 19:26:34.620713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.620740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.620856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.620882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.621008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.621035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.621115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.621143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.621222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.621248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 
00:28:24.374 [2024-12-06 19:26:34.621360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.621387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.621501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.621528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.621600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.621626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.621749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.621776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.621892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.621918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 
00:28:24.374 [2024-12-06 19:26:34.621998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.622025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.622134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.622161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.622281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.622307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.622389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.622416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.622513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.622540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 
00:28:24.374 [2024-12-06 19:26:34.622628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.622654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.622777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.622804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.622907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.622934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.623022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.623048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.623154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.623181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 
00:28:24.374 [2024-12-06 19:26:34.623271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.623298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.623440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.623466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.623611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.623675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.623777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.623805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.623917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.623942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 
00:28:24.374 [2024-12-06 19:26:34.624030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.624056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.624174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.624199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.624308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.624332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.374 [2024-12-06 19:26:34.624450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.374 [2024-12-06 19:26:34.624477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.374 qpair failed and we were unable to recover it. 00:28:24.375 [2024-12-06 19:26:34.624572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.375 [2024-12-06 19:26:34.624600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.375 qpair failed and we were unable to recover it. 
00:28:24.375 [2024-12-06 19:26:34.624691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.624717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.624802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.624828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.624914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.624950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.625058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.625084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.625164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.625190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.625271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.625297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.625404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.625444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.625536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.625562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.625640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.625684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.625766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.625792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.625870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.625895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.625995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.626033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.626140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.626167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.626278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.626305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.626386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.626413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.626491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.626518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.626600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.626625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.626774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.626800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.626878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.626903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.627020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.627049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.627129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.627154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.627233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.627258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.627344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.627375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.627496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.627522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.627638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.627684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.627775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.627802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.627883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.627910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.628002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.628028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.628111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.628137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.628254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.628281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.628360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.628387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.628495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.628521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.628633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.628658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.628746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.628771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.628854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.628878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.628968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.628993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.629064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.375 [2024-12-06 19:26:34.629088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.375 qpair failed and we were unable to recover it.
00:28:24.375 [2024-12-06 19:26:34.629173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.629198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.629312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.629337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.629411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.629436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.629513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.629537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.629654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.629686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.629761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.629785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.629872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.629900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.629989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.630015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.630130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.630156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.630233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.630264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.630342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.630369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.630476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.630515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.630631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.630657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.630788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.630813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.630892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.630917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.631006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.631032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.631141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.631166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.631243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.631269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.631354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.631383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.631500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.631527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.631614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.631641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.631731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.631758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.631840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.631865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.631948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.631974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.632055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.632080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.632190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.632214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.632300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.632327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.632411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.632439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.632557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.632585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.632690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.632717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.632800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.632828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.632909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.632936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.633077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.633103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.633220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.633246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.633362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.633387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.633465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.633490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.633565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.633595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.376 [2024-12-06 19:26:34.633688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.376 [2024-12-06 19:26:34.633714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.376 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.633794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.633818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.633892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.633917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.634001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.634037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.634144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.634170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.634247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.634271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.634358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.634383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.634461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.634488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.634580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.634619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.634737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.634768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.634890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.634917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.635004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.635030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.635116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.635142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.635235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.635263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.635382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.635409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.635528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.635556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.635676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.635704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.635784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.635810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.635910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.635937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.636019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.636044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.636126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.377 [2024-12-06 19:26:34.636152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.377 qpair failed and we were unable to recover it.
00:28:24.377 [2024-12-06 19:26:34.636237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.636264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.636353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.636378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.636464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.636488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.636582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.636608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.636696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.636721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 
00:28:24.377 [2024-12-06 19:26:34.636805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.636833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.636913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.636940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.637028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.637058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.637137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.637163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.637252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.637280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 
00:28:24.377 [2024-12-06 19:26:34.637367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.637394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.637480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.637505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.637588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.637615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.637743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.637770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.377 qpair failed and we were unable to recover it. 00:28:24.377 [2024-12-06 19:26:34.637857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.377 [2024-12-06 19:26:34.637884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 
00:28:24.378 [2024-12-06 19:26:34.638046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.638073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.638164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.638190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.638265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.638291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.638403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.638430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.638522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.638549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 
00:28:24.378 [2024-12-06 19:26:34.638637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.638669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.638790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.638816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.638904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.638931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.639008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.639034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.639117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.639144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 
00:28:24.378 [2024-12-06 19:26:34.639229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.639254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.639340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.639367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.639487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.639514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.639600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.639626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.639794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.639821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 
00:28:24.378 [2024-12-06 19:26:34.639911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.639937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.640020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.640047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.640139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.640166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.640283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.640310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.640420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.640447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 
00:28:24.378 [2024-12-06 19:26:34.640554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.640580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.640696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.640723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.640857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.640884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.640988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.641015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.641134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.641161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 
00:28:24.378 [2024-12-06 19:26:34.641268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.641295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.641404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.641430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.641521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.641549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.641676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.641703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.641784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.641812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 
00:28:24.378 [2024-12-06 19:26:34.641928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.641970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.642095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.642122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.642232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.642258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.642375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.642401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.642496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.642535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 
00:28:24.378 [2024-12-06 19:26:34.642692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.378 [2024-12-06 19:26:34.642720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.378 qpair failed and we were unable to recover it. 00:28:24.378 [2024-12-06 19:26:34.642811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.642838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.642921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.642960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.643085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.643112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.643203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.643229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 
00:28:24.379 [2024-12-06 19:26:34.643306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.643333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.643424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.643463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.643553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.643580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.643703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.643732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.643818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.643845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 
00:28:24.379 [2024-12-06 19:26:34.643964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.643991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.644096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.644122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.644237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.644264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.644353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.644380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.644463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.644491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 
00:28:24.379 [2024-12-06 19:26:34.644578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.644605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.644740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.644767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.644854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.644880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.644979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.645005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.645115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.645141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 
00:28:24.379 [2024-12-06 19:26:34.645232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.645259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.645371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.645398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.645482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.645509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.645588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.645615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.645708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.645736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 
00:28:24.379 [2024-12-06 19:26:34.645818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.645844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.645954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.645981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.646089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.646116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.646193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.646219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.646303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.646330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 
00:28:24.379 [2024-12-06 19:26:34.646438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.646478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.646571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.646599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.646704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.646733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.646826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.646853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.646980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.647008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 
00:28:24.379 [2024-12-06 19:26:34.647089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.647122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.647236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.647264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.647345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.647372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.647482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.647509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.379 qpair failed and we were unable to recover it. 00:28:24.379 [2024-12-06 19:26:34.647637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.379 [2024-12-06 19:26:34.647689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 
00:28:24.380 [2024-12-06 19:26:34.647805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.647833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.647925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.647957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.648035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.648060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.648173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.648199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.648300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.648328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 
00:28:24.380 [2024-12-06 19:26:34.648444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.648470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.648564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.648591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.648732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.648760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.648871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.648897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.649030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.649057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 
00:28:24.380 [2024-12-06 19:26:34.649136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.649163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.649253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.649279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.649363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.649390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.649500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.649526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.649607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.649632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 
00:28:24.380 [2024-12-06 19:26:34.649753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.649780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.649860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.649885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.649970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.649996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.650113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.650139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.650249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.650277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 
00:28:24.380 [2024-12-06 19:26:34.650361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.650387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.650500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.650527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.650608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.650640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.650810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.650849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.650982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.651011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 
00:28:24.380 [2024-12-06 19:26:34.651140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.651168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.651302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.651329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.651448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.651475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.651557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.651583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.651707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.651735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 
00:28:24.380 [2024-12-06 19:26:34.651846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.651872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.652019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.652046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.652150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.652176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.652271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.652298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.652411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.652439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 
00:28:24.380 [2024-12-06 19:26:34.652516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.652542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.652654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.652690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.380 [2024-12-06 19:26:34.652783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.380 [2024-12-06 19:26:34.652810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.380 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.652891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.652917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.653010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.653037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 
00:28:24.381 [2024-12-06 19:26:34.653117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.653143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.653222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.653248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.653324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.653351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.653426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.653453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.653531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.653557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 
00:28:24.381 [2024-12-06 19:26:34.653639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.653673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.653760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.653786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.653905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.653933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.654025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.654053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.654136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.654162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 
00:28:24.381 [2024-12-06 19:26:34.654279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.654307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.654420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.654447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.654528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.654555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.654632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.654686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.654802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.654829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 
00:28:24.381 [2024-12-06 19:26:34.654940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.654966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.655045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.655071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.655156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.655182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.655307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.655345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.655426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.655454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 
00:28:24.381 [2024-12-06 19:26:34.655532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.655560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.655709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.655735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.655809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.655840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.655931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.655958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.656036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.656062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 
00:28:24.381 [2024-12-06 19:26:34.656143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.656169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.656285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.656311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.656400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.656428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.656513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.656539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.656638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.656672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 
00:28:24.381 [2024-12-06 19:26:34.656785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.656812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.656895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.656923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.657072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.657098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.657184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.657209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.381 [2024-12-06 19:26:34.657290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.657317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 
00:28:24.381 [2024-12-06 19:26:34.657400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.381 [2024-12-06 19:26:34.657425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.381 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.657540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.657567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.657679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.657706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.657814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.657840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.657929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.657957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 
00:28:24.382 [2024-12-06 19:26:34.658076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.658103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.658217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.658244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.658325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.658352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.658434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.658461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.658582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.658609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 
00:28:24.382 [2024-12-06 19:26:34.658730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.658759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.658852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.658877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.658965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.659000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.659130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.659161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 00:28:24.382 [2024-12-06 19:26:34.659268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.382 [2024-12-06 19:26:34.659307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.382 qpair failed and we were unable to recover it. 
00:28:24.382 [2024-12-06 19:26:34.659410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.382 [2024-12-06 19:26:34.659438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.382 qpair failed and we were unable to recover it.
00:28:24.382 [... the same three-line pattern repeats continuously through 2024-12-06 19:26:34.673421 for tqpair=0x6bcfa0, tqpair=0x7f82cc000b90, and tqpair=0x7f82c8000b90, each attempt failing to connect to addr=10.0.0.2, port=4420 with errno = 111 and ending in "qpair failed and we were unable to recover it." ...]
00:28:24.385 [2024-12-06 19:26:34.673535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.673561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.673657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.673699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.673799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.673824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.673905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.673930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.674016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.674040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 
00:28:24.385 [2024-12-06 19:26:34.674131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.674157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.674260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.674285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.674399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.674425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.674518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.674543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.674618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.674642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 
00:28:24.385 [2024-12-06 19:26:34.674763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.674787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.674863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.674888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.674972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.674996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.675110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.675134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.675246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.675271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 
00:28:24.385 [2024-12-06 19:26:34.675354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.675378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.675495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.675520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.675600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.675625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.675718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.675744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.675858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.675887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 
00:28:24.385 [2024-12-06 19:26:34.675974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.385 [2024-12-06 19:26:34.675999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.385 qpair failed and we were unable to recover it. 00:28:24.385 [2024-12-06 19:26:34.676080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.676105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.676194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.676222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.676332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.676363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.676483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.676509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 
00:28:24.386 [2024-12-06 19:26:34.676595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.676620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.676731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.676765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.676844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.676870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.676951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.676983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.677077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.677108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 
00:28:24.386 [2024-12-06 19:26:34.677253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.677279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.677405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.677431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.677594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.677620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.677740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.677767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.677861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.677887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 
00:28:24.386 [2024-12-06 19:26:34.678005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.678038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.678181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.678207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.678298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.678325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.678423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.678448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.678531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.678556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 
00:28:24.386 [2024-12-06 19:26:34.678639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.678670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.678761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.678787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.678904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.678928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.679013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.679038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.679115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.679140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 
00:28:24.386 [2024-12-06 19:26:34.679254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.679279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.679359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.679388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.679468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.679492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.679578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.679603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.679692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.679720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 
00:28:24.386 [2024-12-06 19:26:34.679841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.679867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.679947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.679974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.680062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.680087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.680176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.680205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.680326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.680352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 
00:28:24.386 [2024-12-06 19:26:34.680442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.680469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.680559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.680583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.680694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.680720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.386 [2024-12-06 19:26:34.680831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.386 [2024-12-06 19:26:34.680855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.386 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.680941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.680966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 
00:28:24.387 [2024-12-06 19:26:34.681046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.681070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.681180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.681205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.681300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.681324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.681414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.681438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.681534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.681570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 
00:28:24.387 [2024-12-06 19:26:34.681690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.681717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.681843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.681877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.682008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.682034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.682125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.682156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.682240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.682266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 
00:28:24.387 [2024-12-06 19:26:34.682339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.682364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.682445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.682470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.682564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.682590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.682674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.682705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.682829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.682854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 
00:28:24.387 [2024-12-06 19:26:34.682936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.682961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.683039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.683063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.683141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.683165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.683262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.683285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.683373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.683397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 
00:28:24.387 [2024-12-06 19:26:34.683509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.683534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.683612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.683636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.683726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.683751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.683838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.683863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 00:28:24.387 [2024-12-06 19:26:34.683937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.387 [2024-12-06 19:26:34.683962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.387 qpair failed and we were unable to recover it. 
00:28:24.387 [2024-12-06 19:26:34.684036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.387 [2024-12-06 19:26:34.684060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.387 qpair failed and we were unable to recover it.
00:28:24.387 [2024-12-06 19:26:34.684179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.387 [2024-12-06 19:26:34.684204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.387 qpair failed and we were unable to recover it.
00:28:24.387 [2024-12-06 19:26:34.684319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.387 [2024-12-06 19:26:34.684343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.387 qpair failed and we were unable to recover it.
00:28:24.387 [2024-12-06 19:26:34.684425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.387 [2024-12-06 19:26:34.684449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.387 qpair failed and we were unable to recover it.
00:28:24.387 [2024-12-06 19:26:34.684529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.387 [2024-12-06 19:26:34.684553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.387 qpair failed and we were unable to recover it.
00:28:24.387 [2024-12-06 19:26:34.684628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.387 [2024-12-06 19:26:34.684652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.387 qpair failed and we were unable to recover it.
00:28:24.387 [2024-12-06 19:26:34.684757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.387 [2024-12-06 19:26:34.684782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.387 qpair failed and we were unable to recover it.
00:28:24.387 [2024-12-06 19:26:34.684864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.387 [2024-12-06 19:26:34.684889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.387 qpair failed and we were unable to recover it.
00:28:24.387 [2024-12-06 19:26:34.684976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.685001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.685074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.685099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.685175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.685199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.685302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.685326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.685403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.685427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.685522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.685547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.685660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.685699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.685786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.685816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.685933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.685958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.686041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.686067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.686143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.686167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.686260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.686284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.686366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.686390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.686476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.686506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.686595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.686622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.686719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.686747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.686886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.686911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.687033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.687060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.687140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.687166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.687243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.687270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.687358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.687384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.687471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.687498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.687604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.687631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.687764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.687791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.687880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.687913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.688001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.688028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.688139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.688164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.688252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.688279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.688362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.688387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.688481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.688507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.688625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.688651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.688748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.688778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.688863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.688888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.689005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.689030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.689107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.689135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.689230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.689256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.689368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.689392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.689480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.388 [2024-12-06 19:26:34.689508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.388 qpair failed and we were unable to recover it.
00:28:24.388 [2024-12-06 19:26:34.689617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.689643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.689770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.689811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.689903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.689930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.690047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.690071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.690152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.690176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.690259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.690284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.690397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.690422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.690506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.690531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.690604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.690628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.690725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.690750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.690830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.690854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.690928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.690952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.691093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.691118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.691206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.691231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.691307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.691331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.691407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.691431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.691512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.691547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.691640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.691684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.691805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.691834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.691918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.691945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.692034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.692060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.692176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.692212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.692304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.692331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.692441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.692469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.692552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.692578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.692667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.692693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.692779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.692804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.692898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.692923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.692998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.693022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.693131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.693155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.693240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.693265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.693355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.693385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.693521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.693549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.693681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.693716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.693806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.693831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.693919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.693950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.694039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.694066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.694164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.694191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.389 [2024-12-06 19:26:34.694275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.389 [2024-12-06 19:26:34.694300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.389 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.694410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.694435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.694512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.694536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.694622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.694647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.694762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.694787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.694869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.694894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.694989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.695014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.695091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.695116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.695198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.695223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.695306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.695330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.695414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.695439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.695524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.695553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.695646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.695700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.695813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.695849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.695944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.695970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.696083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.696108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.696204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.696230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.696323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.696348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.696426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.696452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.696530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.696554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.696632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.696656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.696744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.696768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.696849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.696873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.696953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.696978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.697058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.697083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.697198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.697222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.697312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.697342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.697429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.697455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.697548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.697583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.697748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.697786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.697880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.697909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.698029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.698056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.698146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.698173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.698248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.698280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.698397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.698423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.698505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.698531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.698611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.698636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.698732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.698759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.698843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.390 [2024-12-06 19:26:34.698870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.390 qpair failed and we were unable to recover it.
00:28:24.390 [2024-12-06 19:26:34.698947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.698977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.699073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.699099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.699179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.699205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.699312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.699344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.699439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.699473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.699561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.699588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.699676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.699714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.699801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.699826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.699907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.699931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.700012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.700037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.700125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.700150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.700234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.700270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.700371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.700400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.700480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.700508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.700607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.700633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.700756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.700781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.700865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.700890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.701010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.701034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.701143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.701168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.701253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.701278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.701360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.701386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.701470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.701494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.701568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.701593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.701691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.701716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.701797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.701821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.701902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.701927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.702038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.702062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.702140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.702169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.702259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.702283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.702385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.702410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.702494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.702518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.702604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.702629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.702735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.702760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.702836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.702860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.702950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.391 [2024-12-06 19:26:34.702974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.391 qpair failed and we were unable to recover it.
00:28:24.391 [2024-12-06 19:26:34.703085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.703109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.703185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.703211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.703288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.703313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.703387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.703411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.703494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.703518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.703599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.703624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.703744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.703770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.703881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.703905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.703990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.704015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.704123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.704149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.704286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.704311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.704393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.704418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.704498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.704535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.704624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.704650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.704795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.704824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.704916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.704942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.705073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.705098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.705218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.705244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.705344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.705370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.705452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.705485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.705569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.705595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.705672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.705698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.705786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.705811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.705891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.705916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.705994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.706018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.706154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.706180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.706267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.706292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.706368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.706393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.706470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.706495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.706572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.706597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.706685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.706724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.706810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.392 [2024-12-06 19:26:34.706836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.392 qpair failed and we were unable to recover it.
00:28:24.392 [2024-12-06 19:26:34.706929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.392 [2024-12-06 19:26:34.706955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.392 qpair failed and we were unable to recover it. 00:28:24.392 [2024-12-06 19:26:34.707039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.392 [2024-12-06 19:26:34.707065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.392 qpair failed and we were unable to recover it. 00:28:24.392 [2024-12-06 19:26:34.707148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.392 [2024-12-06 19:26:34.707173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.392 qpair failed and we were unable to recover it. 00:28:24.392 [2024-12-06 19:26:34.707264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.392 [2024-12-06 19:26:34.707289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.392 qpair failed and we were unable to recover it. 00:28:24.392 [2024-12-06 19:26:34.707402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.392 [2024-12-06 19:26:34.707427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.392 qpair failed and we were unable to recover it. 
00:28:24.392 [2024-12-06 19:26:34.707509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.392 [2024-12-06 19:26:34.707534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.392 qpair failed and we were unable to recover it. 00:28:24.392 [2024-12-06 19:26:34.707628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.392 [2024-12-06 19:26:34.707652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.707743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.707768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.707840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.707864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.707944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.707969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 
00:28:24.393 [2024-12-06 19:26:34.708053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.708078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.708155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.708178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.708260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.708285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.708413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.708441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.708521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.708546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 
00:28:24.393 [2024-12-06 19:26:34.708638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.708671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.708802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.708833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.708922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.708948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.709037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.709062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.709144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.709170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 
00:28:24.393 [2024-12-06 19:26:34.709245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.709270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.709355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.709381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.709457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.709481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.709563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.709587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.709688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.709725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 
00:28:24.393 [2024-12-06 19:26:34.709829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.709853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.709935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.709959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.710039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.710064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.710173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.710201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.710277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.710303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 
00:28:24.393 [2024-12-06 19:26:34.710391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.710418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.710524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.710555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.710646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.710681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.710809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.710841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.710921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.710947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 
00:28:24.393 [2024-12-06 19:26:34.711064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.711089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.711213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.711239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.711357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.711383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.711496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.711521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.711604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.711629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 
00:28:24.393 [2024-12-06 19:26:34.711752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.711780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.711893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.711923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.712012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.712038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.712108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.712133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 00:28:24.393 [2024-12-06 19:26:34.712215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.393 [2024-12-06 19:26:34.712239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.393 qpair failed and we were unable to recover it. 
00:28:24.394 [2024-12-06 19:26:34.712356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.712382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.712484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.712509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.712586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.712610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.712711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.712736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.712817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.712841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 
00:28:24.394 [2024-12-06 19:26:34.712931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.712956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.713070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.713095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.713177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.713202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.713289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.713314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.713422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.713447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 
00:28:24.394 [2024-12-06 19:26:34.713533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.713566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.713649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.713683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.713775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.713802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.713908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.713933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.714019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.714045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 
00:28:24.394 [2024-12-06 19:26:34.714157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.714184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.714303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.714329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.714414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.714439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.714517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.714543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.714627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.714653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 
00:28:24.394 [2024-12-06 19:26:34.714739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.714765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.714845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.714871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.714949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.714974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.715081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.715106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.715197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.715223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 
00:28:24.394 [2024-12-06 19:26:34.715309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.715334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.715411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.715435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.715544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.715570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.715688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.715713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.715792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.715817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 
00:28:24.394 [2024-12-06 19:26:34.715895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.715919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.715999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.716023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.716141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.716166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.716252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.716276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.716349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.716373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 
00:28:24.394 [2024-12-06 19:26:34.716458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.716482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.716566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.716591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.716729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.394 [2024-12-06 19:26:34.716758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.394 qpair failed and we were unable to recover it. 00:28:24.394 [2024-12-06 19:26:34.716844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.716869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.716952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.716976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 
00:28:24.395 [2024-12-06 19:26:34.717052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.717077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.717152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.717177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.717253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.717277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.717364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.717389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.717492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.717516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 
00:28:24.395 [2024-12-06 19:26:34.717601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.717628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.717725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.717754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.717837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.717872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.717977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.718004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.718083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.718108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 
00:28:24.395 [2024-12-06 19:26:34.718195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.718229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.718323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.718349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.718445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.718471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.718557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.718582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.718695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.718721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 
00:28:24.395 [2024-12-06 19:26:34.718804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.718828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.718917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.718941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.719016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.719040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.719120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.719144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.719279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.719303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 
00:28:24.395 [2024-12-06 19:26:34.719382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.719407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.719493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.719522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.719609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.719635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.719732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.719759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.719855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.719886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 
00:28:24.395 [2024-12-06 19:26:34.719998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.720025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.720141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.720166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.720250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.720276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.720362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.720387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.720468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.720494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 
00:28:24.395 [2024-12-06 19:26:34.720575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.720600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.720680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.720705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.720805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.720831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.720913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.720939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.721023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.721048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 
00:28:24.395 [2024-12-06 19:26:34.721126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.721151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.395 [2024-12-06 19:26:34.721234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.395 [2024-12-06 19:26:34.721262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.395 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.721369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.721394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.721517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.721544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.721636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.721662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 
00:28:24.396 [2024-12-06 19:26:34.721799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.721826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.721916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.721941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.722054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.722084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.722171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.722196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.722291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.722318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 
00:28:24.396 [2024-12-06 19:26:34.722396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.722423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.722499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.722524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.722604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.722629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.722725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.722750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.722837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.722861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 
00:28:24.396 [2024-12-06 19:26:34.722941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.722966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.723046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.723075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.723160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.723185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.723260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.723284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.723376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.723405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 
00:28:24.396 [2024-12-06 19:26:34.723543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.723568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.723655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.723690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.723808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.723834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.723941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.723971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.724065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.724091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 
00:28:24.396 [2024-12-06 19:26:34.724178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.724205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.724282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.724308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.724385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.724412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.724501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.724528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.724622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.724646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 
00:28:24.396 [2024-12-06 19:26:34.724765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.724791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.724874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.724898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.724978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.725003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.725091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.725117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 00:28:24.396 [2024-12-06 19:26:34.725197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.725221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.396 qpair failed and we were unable to recover it. 
00:28:24.396 [2024-12-06 19:26:34.725304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.396 [2024-12-06 19:26:34.725327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.725408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.725432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.725507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.725533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.725604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.725630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.725729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.725754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 
00:28:24.397 [2024-12-06 19:26:34.725833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.725857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.725940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.725964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.726050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.726074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.726154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.726181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.726258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.726281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 
00:28:24.397 [2024-12-06 19:26:34.726435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.726459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.726568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.726591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.726721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.726746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.726826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.726849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.726928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.726951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 
00:28:24.397 [2024-12-06 19:26:34.727033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.727056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.727129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.727152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.727256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.727279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.727385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.727421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.727548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.727576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 
00:28:24.397 [2024-12-06 19:26:34.727655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.727689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.727771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.727796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.727892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.727918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.728011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.728036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.728125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.728150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 
00:28:24.397 [2024-12-06 19:26:34.728233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.728257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.728364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.728387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.728490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.728517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.728623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.728648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.728769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.728795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 
00:28:24.397 [2024-12-06 19:26:34.728904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.728934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.729018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.729042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.729147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.729171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.729291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.729317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.729427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.729452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 
00:28:24.397 [2024-12-06 19:26:34.729538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.729562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.729641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.729673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.729788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.729813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.397 [2024-12-06 19:26:34.729920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.397 [2024-12-06 19:26:34.729944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.397 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.730029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.730053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 
00:28:24.398 [2024-12-06 19:26:34.730134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.730158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.730279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.730306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.730388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.730414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.730533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.730558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.730675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.730700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 
00:28:24.398 [2024-12-06 19:26:34.730790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.730814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.730899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.730922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.730993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.731017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.731127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.731153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.398 [2024-12-06 19:26:34.731239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.731277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 
00:28:24.398 [2024-12-06 19:26:34.731357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.731382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.731469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.731494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:24.398 [2024-12-06 19:26:34.731570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.731595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.731702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.731727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.731812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.731837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 
00:28:24.398 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:24.398 [2024-12-06 19:26:34.731928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.731952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.732029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.732053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.732162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.732188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.398 [2024-12-06 19:26:34.732263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.732288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.732372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.732396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 
00:28:24.398 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.398 [2024-12-06 19:26:34.732475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.732522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.732629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.732653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.732751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.732775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.732865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.732891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.732974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.732998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 
00:28:24.398 [2024-12-06 19:26:34.733110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.733135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.733258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.733282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.733355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.733381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.733452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.733477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.733594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.733619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 
00:28:24.398 [2024-12-06 19:26:34.733744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.733769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.733850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.733875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.733955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.733980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.734060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.734086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 00:28:24.398 [2024-12-06 19:26:34.734176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.398 [2024-12-06 19:26:34.734200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.398 qpair failed and we were unable to recover it. 
00:28:24.398 [2024-12-06 19:26:34.734288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.734314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.734424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.734450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.734531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.734560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.734643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.734691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.734784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.734813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 
00:28:24.399 [2024-12-06 19:26:34.734919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.734945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.735030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.735060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.735155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.735179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.735259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.735285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.735370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.735396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 
00:28:24.399 [2024-12-06 19:26:34.735510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.735535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.735614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.735638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.735750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.735776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.735857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.735881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.735993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.736018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 
00:28:24.399 [2024-12-06 19:26:34.736128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.736153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.736227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.736251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.736353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.736382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.736476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.736503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.736615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.736641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 
00:28:24.399 [2024-12-06 19:26:34.736748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.736776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.736861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.736888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.736968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.736994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.737101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.737128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.737235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.737260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 
00:28:24.399 [2024-12-06 19:26:34.737376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.737401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.737478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.737502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.737610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.737635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.737729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.737754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.737862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.737887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 
00:28:24.399 [2024-12-06 19:26:34.737979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.738004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.738084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.738109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.738189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.738214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.738303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.738327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.738442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.738467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 
00:28:24.399 [2024-12-06 19:26:34.738542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.738566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.738648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.738686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.738774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.738799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.399 qpair failed and we were unable to recover it. 00:28:24.399 [2024-12-06 19:26:34.738871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.399 [2024-12-06 19:26:34.738895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.400 qpair failed and we were unable to recover it. 00:28:24.400 [2024-12-06 19:26:34.739014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.400 [2024-12-06 19:26:34.739043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.400 qpair failed and we were unable to recover it. 
00:28:24.400 [2024-12-06 19:26:34.739124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.400 [2024-12-06 19:26:34.739150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.400 qpair failed and we were unable to recover it. 00:28:24.400 [2024-12-06 19:26:34.739229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.400 [2024-12-06 19:26:34.739254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.400 qpair failed and we were unable to recover it. 00:28:24.400 [2024-12-06 19:26:34.739366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.400 [2024-12-06 19:26:34.739390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.400 qpair failed and we were unable to recover it. 00:28:24.400 [2024-12-06 19:26:34.739481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.400 [2024-12-06 19:26:34.739506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.400 qpair failed and we were unable to recover it. 00:28:24.400 [2024-12-06 19:26:34.739582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.400 [2024-12-06 19:26:34.739606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.400 qpair failed and we were unable to recover it. 
00:28:24.400 [2024-12-06 19:26:34.739696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.739721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.739838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.739864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.739952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.739976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.740057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.740081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.740161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.740185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.740324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.740349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.740438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.740463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.740547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.740572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.740661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.740696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.740816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.740843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.740940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.740969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.741081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.741107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.741224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.741256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.741343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.741370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.741456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.741481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.741560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.741585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.741674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.741699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.741810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.741835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.741918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.741943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.742023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.742047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.742154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.742179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.742266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.742298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.742380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.742405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.742526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.742562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.742653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.742688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.742771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.742799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.742878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.742906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.742986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.400 [2024-12-06 19:26:34.743013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.400 qpair failed and we were unable to recover it.
00:28:24.400 [2024-12-06 19:26:34.743121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.743147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.743228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.743254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.743333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.743358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.743460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.743485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.743567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.743591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.743685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.743709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.743784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.743809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.743896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.743921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.743999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.744026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.744108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.744133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.744217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.744242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.744325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.744350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.744458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.744483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.744562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.744590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.744681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.744709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.744821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.744853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.744942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.744968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.745054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.745080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.745156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.745184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.745276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.745302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.745378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.745407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.745501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.745526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.745629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.745653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.745748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.745774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.745860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.745884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.745962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.745986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.746086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.746111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.746191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.746220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.746298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.746322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.746432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.746456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.746530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.746554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.746642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.746672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.746765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.746790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.746904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.746928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.747015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.747039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.747129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.747154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.747233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.747258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.747336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.747362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.401 [2024-12-06 19:26:34.747445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.401 [2024-12-06 19:26:34.747470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.401 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.747572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.747598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.747704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.747729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.747809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.747835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.747920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.747945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.748033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.748059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.748202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.748227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.748320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.748344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.748416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.748440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.748527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.748552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.748640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.748671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.748778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.748803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.748878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.748904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.749020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.749045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.749115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.749140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.749224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.749248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.749349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.749373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.749460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.749484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.749593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.749617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.749698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.749724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.749800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.749825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.749936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.749962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.750039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.750063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.750175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.750201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.750278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.750304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.750381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.750406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.750517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.750541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.750627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.750651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.750740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.750764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.750847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.750871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.750977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.751009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.751091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.751115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.751199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.751242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.751340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.751368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.751474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.751502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.751585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.751611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.751730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.751760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.751856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.751887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.751976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.402 [2024-12-06 19:26:34.752002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.402 qpair failed and we were unable to recover it.
00:28:24.402 [2024-12-06 19:26:34.752085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.752113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.752207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.752234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.752319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.752346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.752429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.752455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.752560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.752587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.752699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.752725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.752814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.752839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.752954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.752978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.753093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.753119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.753207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.403 [2024-12-06 19:26:34.753232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.403 qpair failed and we were unable to recover it.
00:28:24.403 [2024-12-06 19:26:34.753314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.753339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.753419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.753448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.753530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.753555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.753630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.753655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.753752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.753776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 
00:28:24.403 [2024-12-06 19:26:34.753889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.753916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.754032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.754056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.754131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.754155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.754264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.754289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.754402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.754427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 
00:28:24.403 [2024-12-06 19:26:34.754508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.754532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.754626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.754652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.754750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.754775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.754890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.754915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.754999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.755024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 
00:28:24.403 [2024-12-06 19:26:34.755110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.755135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.755220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.755245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.403 [2024-12-06 19:26:34.755358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.755384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.755458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.755483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.755573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.755617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 
00:28:24.403 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:24.403 [2024-12-06 19:26:34.755722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.755751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.755841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.755870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.403 [2024-12-06 19:26:34.755947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.755974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.403 [2024-12-06 19:26:34.756114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.756143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 
00:28:24.403 [2024-12-06 19:26:34.756233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.756258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.756386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.756413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.756528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.403 [2024-12-06 19:26:34.756558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.403 qpair failed and we were unable to recover it. 00:28:24.403 [2024-12-06 19:26:34.756646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.756687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.756773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.756799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 
00:28:24.404 [2024-12-06 19:26:34.756887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.756913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.756994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.757019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.757100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.757124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.757201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.757226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.757308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.757332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 
00:28:24.404 [2024-12-06 19:26:34.757421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.757450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.757557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.757583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.757687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.757715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.757793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.757818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.757930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.757961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 
00:28:24.404 [2024-12-06 19:26:34.758080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.758106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.758264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.758290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.758376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.758402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.758517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.758542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.758616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.758641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 
00:28:24.404 [2024-12-06 19:26:34.758742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.758768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.758846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.758871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.758958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.758982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.759062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.759086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.759157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.759181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 
00:28:24.404 [2024-12-06 19:26:34.759258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.759282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.759419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.759444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.759519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.759544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.759636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.759671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.759773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.759804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 
00:28:24.404 [2024-12-06 19:26:34.759921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.759948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.760043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.760068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.760148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.760174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.760286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.760312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.760394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.760420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 
00:28:24.404 [2024-12-06 19:26:34.760535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.760561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.760653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.760703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.760790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.760816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.760924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.760949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.761041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.761066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 
00:28:24.404 [2024-12-06 19:26:34.761150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.404 [2024-12-06 19:26:34.761180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.404 qpair failed and we were unable to recover it. 00:28:24.404 [2024-12-06 19:26:34.761272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.405 [2024-12-06 19:26:34.761299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.405 qpair failed and we were unable to recover it. 00:28:24.405 [2024-12-06 19:26:34.761388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.405 [2024-12-06 19:26:34.761415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.405 qpair failed and we were unable to recover it. 00:28:24.405 [2024-12-06 19:26:34.761554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.405 [2024-12-06 19:26:34.761581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.405 qpair failed and we were unable to recover it. 00:28:24.405 [2024-12-06 19:26:34.761699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.405 [2024-12-06 19:26:34.761726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.405 qpair failed and we were unable to recover it. 
00:28:24.405 [2024-12-06 19:26:34.761801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.405 [2024-12-06 19:26:34.761826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.405 qpair failed and we were unable to recover it. 00:28:24.405 [2024-12-06 19:26:34.761925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.405 [2024-12-06 19:26:34.761952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.405 qpair failed and we were unable to recover it. 00:28:24.405 [2024-12-06 19:26:34.762041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.405 [2024-12-06 19:26:34.762067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.405 qpair failed and we were unable to recover it. 00:28:24.405 [2024-12-06 19:26:34.762149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.405 [2024-12-06 19:26:34.762174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.405 qpair failed and we were unable to recover it. 00:28:24.405 [2024-12-06 19:26:34.762248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.405 [2024-12-06 19:26:34.762272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.405 qpair failed and we were unable to recover it. 
00:28:24.405 [2024-12-06 19:26:34.762349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.762373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.762452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.762477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.762556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.762581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.762673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.762699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.762807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.762832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.762911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.762937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.763047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.763099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.763187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.763212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.763301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.763329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.763441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.763468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.763548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.763574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.763648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.763682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.763814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.763848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.763935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.763961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.764041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.764067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.764162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.764189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.764311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.764338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.764415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.764442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.764555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.764581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.764684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.764710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.764796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.764821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.764908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.764933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.765028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.765053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.765138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.765163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.765244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.765269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.405 [2024-12-06 19:26:34.765382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.405 [2024-12-06 19:26:34.765407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.405 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.765508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.765534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.765629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.765654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.765752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.765777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.765855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.765879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.765955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.765980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.766077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.766102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.766218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.766247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.766339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.766372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.766452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.766477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.766560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.766585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.766699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.766740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.766834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.766860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.766934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.766967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.767090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.767115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.767198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.767224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.767338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.767365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.767448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.767474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.767583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.767610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.767692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.767722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.767802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.767828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.767936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.767961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.768061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.768086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.768201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.768227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.768308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.768333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.768415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.768443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.768528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.768560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.768644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.768679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.768772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.768800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.768892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.768920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.769036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.769062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.769142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.769175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.769259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.769285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.769363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.769393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.769482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.769508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.769616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.769646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.769783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.769822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.769929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.406 [2024-12-06 19:26:34.769956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.406 qpair failed and we were unable to recover it.
00:28:24.406 [2024-12-06 19:26:34.770084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.770110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.770221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.770247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.770326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.770351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.770429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.770454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.770538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.770567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.770658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.770694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.770778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.770804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.770893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.770922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.771053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.771080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.771160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.771186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.771278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.771307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.771397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.771421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.771511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.771536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.771614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.771639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.771744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.771770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.771880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.771905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.771987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.772011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.772090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.772114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.772201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.772226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.772302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.772327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.772403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.772427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.772535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.772560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.772647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.772677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.772767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.772792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.772875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.772904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.772984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.773009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.773090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.773114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.773210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.773234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.773310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.773335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.773410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.773435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.773514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.773539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.773621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.773646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.773748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.773772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.773860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.773886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.773976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.774001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.774081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.774107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.774187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.774212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.774324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.774349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.407 [2024-12-06 19:26:34.774431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.407 [2024-12-06 19:26:34.774455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.407 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.774529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.774554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.774630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.774655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.774752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.774776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.774859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.774885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.774953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.774978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.775053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.775078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.775192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.775216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.775325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.775349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.775436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.775461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.775544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.775569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.775677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.775703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.775774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.775799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.775882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.408 [2024-12-06 19:26:34.775914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.408 qpair failed and we were unable to recover it.
00:28:24.408 [2024-12-06 19:26:34.775998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.776023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.776102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.776126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.776244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.776269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.776347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.776371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.776450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.776475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 
00:28:24.408 [2024-12-06 19:26:34.776551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.776575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.776682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.776708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.776822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.776847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.776929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.776954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.777050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.777074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 
00:28:24.408 [2024-12-06 19:26:34.777153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.777177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.777259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.777284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.777355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.777380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.777487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.777534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.777717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.777756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 
00:28:24.408 [2024-12-06 19:26:34.777850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.777883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.777976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.778003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.778097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.778123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.778234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.778266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.778346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.778371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 
00:28:24.408 [2024-12-06 19:26:34.778448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.778473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.778552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.778578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.778662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.778696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.778781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.778807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 00:28:24.408 [2024-12-06 19:26:34.778884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.408 [2024-12-06 19:26:34.778908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.408 qpair failed and we were unable to recover it. 
00:28:24.408 [2024-12-06 19:26:34.778993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.779018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.779098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.779127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.779244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.779269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.779350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.779378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.779499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.779526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 
00:28:24.409 [2024-12-06 19:26:34.779607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.779633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.779740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.779773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.779855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.779882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.779992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.780017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.780108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.780135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 
00:28:24.409 [2024-12-06 19:26:34.780221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.780246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.780363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.780388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.780460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.780484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.780563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.780587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.780670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.780695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 
00:28:24.409 [2024-12-06 19:26:34.780818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.780843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.780917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.780942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.781049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.781073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.781161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.781189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.781285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.781318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 
00:28:24.409 [2024-12-06 19:26:34.781430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.781456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.781561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.781592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.781683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.781710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.781794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.781819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.781929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.781956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 
00:28:24.409 [2024-12-06 19:26:34.782044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.782069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.782148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.782174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.782254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.782280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.782364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.782393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.782507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.782532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 
00:28:24.409 [2024-12-06 19:26:34.782613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.782637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.782737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.782762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.782844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.782869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.782973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.782998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 00:28:24.409 [2024-12-06 19:26:34.783080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.409 [2024-12-06 19:26:34.783104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.409 qpair failed and we were unable to recover it. 
00:28:24.409 [2024-12-06 19:26:34.783196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.783220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.783332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.783360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.783460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.783486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.783571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.783597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.783685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.783721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-12-06 19:26:34.783818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.783848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.783938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.783964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.784085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.784111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.784202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.784231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.784378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.784404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-12-06 19:26:34.784488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.784515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.784601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.784627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.784725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.784750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.784836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.784860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.784948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.784973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-12-06 19:26:34.785066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.785091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.785171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.785196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.785272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.785296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.785392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.785436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.785567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.785595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-12-06 19:26:34.785687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.785726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.785819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.785845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.785957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.785982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.786101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.786126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-12-06 19:26:34.786209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.410 [2024-12-06 19:26:34.786233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-12-06 19:26:34.786317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.786342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.786454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.786479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.786569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.786597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.786695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.786728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.786818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.786844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.786922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.786947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.787063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.787090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.787171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.787197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.787280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.787313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.787431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.787456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.787549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.787575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.787674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.787701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.787790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.410 [2024-12-06 19:26:34.787815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.410 qpair failed and we were unable to recover it.
00:28:24.410 [2024-12-06 19:26:34.787898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.787924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.788017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.788042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.788122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.788146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.788258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.788283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.788360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.788385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.788488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.788526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.788625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.788672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82c8000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.788765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.788793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.788880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.788905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.789031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.789064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.789154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.789180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.789273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.789300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.789386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.789411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.789487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.789513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.789586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.789611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.789720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.789745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.789828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.789853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.789938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.789962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.790042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.790067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.790142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.790166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.790250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.790275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.790391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.790416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.790527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.790552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.790642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.790678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.790769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.790795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.790920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.790947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.791030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.791055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.791152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.791181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.791264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.791289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.791375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.791406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.791501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.791527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.791642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.791673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.791766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.791790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.791875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.791900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.792011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.792035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.792149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.792174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.792259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.792288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.792373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.792397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.792478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.792503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.792593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.792617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.792697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.792722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.411 [2024-12-06 19:26:34.792806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.411 [2024-12-06 19:26:34.792830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.411 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.792919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.792944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.793020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.793044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.793124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.793149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.793230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.793255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.793372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.793397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.793473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.793498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 Malloc0
00:28:24.412 [2024-12-06 19:26:34.793577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.793602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.793686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.793712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.793807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.793831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.793913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.793938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.794023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.794047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.794135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.412 [2024-12-06 19:26:34.794164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.794262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.794288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.794408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.794439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:24.412 [2024-12-06 19:26:34.794530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.794555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.794696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.794731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.412 [2024-12-06 19:26:34.794851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.794878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.794958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.794984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:24.412 [2024-12-06 19:26:34.795080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.795113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.795224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.795250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.795361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.795388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.795473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.795498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.795603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.795629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.795718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.795744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.795834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.795860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.795940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.795964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.796047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.796072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.796148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.796172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.796251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.796276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.796353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.796378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.796471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.796495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.796574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.796598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.796675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.796701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.796780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.796809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.796891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.796916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.797027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.797051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.797131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.797156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.797236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.412 [2024-12-06 19:26:34.797261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.412 qpair failed and we were unable to recover it.
00:28:24.412 [2024-12-06 19:26:34.797344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.412 [2024-12-06 19:26:34.797373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.412 qpair failed and we were unable to recover it. 00:28:24.412 [2024-12-06 19:26:34.797494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.412 [2024-12-06 19:26:34.797524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.797643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.797680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.797766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.797792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.797884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.797909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 
00:28:24.413 [2024-12-06 19:26:34.798018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.798043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.798133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.798160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.798259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.798285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.798362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.798387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.798472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.798496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 
00:28:24.413 [2024-12-06 19:26:34.798587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.798612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.798690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.798715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.798802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.798829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.798913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.798938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.799016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.799041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 
00:28:24.413 [2024-12-06 19:26:34.799116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.799141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.799214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.799239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.799310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.799334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.799418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.799447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.799541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.799566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 
00:28:24.413 [2024-12-06 19:26:34.799675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.799704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.799801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.799828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.799918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.799949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.800032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.800059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.800173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.800199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 
00:28:24.413 [2024-12-06 19:26:34.800290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.800314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.800424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.800449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.800527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.800551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.800662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.800693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.800776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.800800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 
00:28:24.413 [2024-12-06 19:26:34.800903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.800928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.801017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.801042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.801122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.801147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.801233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.801236] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.413 [2024-12-06 19:26:34.801257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.801341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.801364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 
00:28:24.413 [2024-12-06 19:26:34.801440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.801470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.801574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.801599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.801679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.801704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.801790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.801815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.801903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.801927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 
00:28:24.413 [2024-12-06 19:26:34.802011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.802036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.802122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.802147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.802224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.413 [2024-12-06 19:26:34.802249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.413 qpair failed and we were unable to recover it. 00:28:24.413 [2024-12-06 19:26:34.802330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.802354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.802462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.802488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 
00:28:24.414 [2024-12-06 19:26:34.802592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.802616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.802705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.802734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.802816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.802842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.802923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.802954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.803048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.803075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 
00:28:24.414 [2024-12-06 19:26:34.803157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.803182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.803258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.803283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.803359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.803383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.803464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.803488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.803590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.803615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 
00:28:24.414 [2024-12-06 19:26:34.803710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.803738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.803857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.803888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.803987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.804013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.804103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.804128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.804239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.804272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 
00:28:24.414 [2024-12-06 19:26:34.804360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.804386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.804476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.804502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.804586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.804610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.804732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.804758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.804839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.804864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 
00:28:24.414 [2024-12-06 19:26:34.804948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.804972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.805052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.805077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.805208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.805232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.805321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.805346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.805426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.805450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 
00:28:24.414 [2024-12-06 19:26:34.805533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.805557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.805673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.414 [2024-12-06 19:26:34.805698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.414 qpair failed and we were unable to recover it. 00:28:24.414 [2024-12-06 19:26:34.805785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.805810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.805894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.805919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.806001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.806026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 
00:28:24.415 [2024-12-06 19:26:34.806107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.806131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.806249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.806277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.806367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.806393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.806480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.806507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.806592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.806618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 
00:28:24.415 [2024-12-06 19:26:34.806707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.806739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.806836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.806863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.806981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.807007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.807105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.807132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.807239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.807266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 
00:28:24.415 [2024-12-06 19:26:34.807375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.807404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.807499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.807526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.807601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.807626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.807748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.807774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.807857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.807886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 
00:28:24.415 [2024-12-06 19:26:34.807964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.807989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.808071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.808095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.808178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.808203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.808311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.808335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 00:28:24.415 [2024-12-06 19:26:34.808443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.415 [2024-12-06 19:26:34.808469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.415 qpair failed and we were unable to recover it. 
00:28:24.415 [2024-12-06 19:26:34.808542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.415 [2024-12-06 19:26:34.808567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.415 qpair failed and we were unable to recover it.
00:28:24.415 [2024-12-06 19:26:34.808646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.415 [2024-12-06 19:26:34.808675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.415 qpair failed and we were unable to recover it.
00:28:24.415 [2024-12-06 19:26:34.808798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.415 [2024-12-06 19:26:34.808824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.415 qpair failed and we were unable to recover it.
00:28:24.415 [2024-12-06 19:26:34.808946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.415 [2024-12-06 19:26:34.808970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.415 qpair failed and we were unable to recover it.
00:28:24.415 [2024-12-06 19:26:34.809045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.415 [2024-12-06 19:26:34.809069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.415 qpair failed and we were unable to recover it.
00:28:24.415 [2024-12-06 19:26:34.809150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.415 [2024-12-06 19:26:34.809178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.415 qpair failed and we were unable to recover it.
00:28:24.415 [2024-12-06 19:26:34.809268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.415 [2024-12-06 19:26:34.809295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.415 qpair failed and we were unable to recover it.
00:28:24.415 [2024-12-06 19:26:34.809382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.415 [2024-12-06 19:26:34.809409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.415 qpair failed and we were unable to recover it.
00:28:24.415 [2024-12-06 19:26:34.809526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.809552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.809639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.416 [2024-12-06 19:26:34.809670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.809749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.809774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:24.416 [2024-12-06 19:26:34.809858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.809886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.809974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.416 [2024-12-06 19:26:34.809999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:24.416 [2024-12-06 19:26:34.810113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.810138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.810223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.810247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.810359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.810385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.810497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.810522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.810636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.810661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.810752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.810776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.810853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.810878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.810957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.810982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.811084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.811110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.811219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.811244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.811320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.811345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.811417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.811443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.811556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.811581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.811649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.811681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.811763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.811787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.811864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.811888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.811972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.811996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.812102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.812127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.812205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.812231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.812314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.812341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.812431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.812463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.812549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.812574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.812660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.812691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.812777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.812801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.416 [2024-12-06 19:26:34.812916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.416 [2024-12-06 19:26:34.812940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.416 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.813022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.813047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.813125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.813150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.813233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.813262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.813342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.813373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.813467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.813495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.813613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.813638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.813751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.813780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.813861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.813886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.813967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.813994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.814126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.814152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.814244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.814270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.814349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.814376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.814492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.814517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.814603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.814629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.814714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.814739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.814861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.814887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.814968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.814994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.815083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.815109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.815216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.815241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.815321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.815346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.815432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.815460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.815548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.815573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.815653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.815694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.815780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.815806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.815916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.815948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.816039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.816066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.816147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.816173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.816261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.816287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.417 [2024-12-06 19:26:34.816369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.417 [2024-12-06 19:26:34.816394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.417 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.816472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.816496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.816603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.816628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.816715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.816741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.816816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.816841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.816928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.816954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.817031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.817056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.817145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.817177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.817280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.817309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.817396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.817421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.418 [2024-12-06 19:26:34.817508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.817536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:24.418 [2024-12-06 19:26:34.817628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.817655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.418 [2024-12-06 19:26:34.817745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.817772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:24.418 [2024-12-06 19:26:34.817858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.817890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.817974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.817999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.818087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.818113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.818224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.818250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.818338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.818371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.818447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.818473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.818581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.818612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.818725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.818752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.818840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.818866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.818943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.818970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.819078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.819105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.819183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.819208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.819354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.418 [2024-12-06 19:26:34.819381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420
00:28:24.418 qpair failed and we were unable to recover it.
00:28:24.418 [2024-12-06 19:26:34.819469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.418 [2024-12-06 19:26:34.819496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.418 qpair failed and we were unable to recover it. 00:28:24.418 [2024-12-06 19:26:34.819609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.418 [2024-12-06 19:26:34.819638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.418 qpair failed and we were unable to recover it. 00:28:24.418 [2024-12-06 19:26:34.819752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.819780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.819865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.819891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.820001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 
00:28:24.419 [2024-12-06 19:26:34.820106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.820231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.820345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.820447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.820553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 
00:28:24.419 [2024-12-06 19:26:34.820654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.820762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.820869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.820973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.820996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.821086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.821109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 
00:28:24.419 [2024-12-06 19:26:34.821181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.821204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.821312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.821336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.821413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.821437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.821545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.821570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.821654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.821686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 
00:28:24.419 [2024-12-06 19:26:34.821791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.821819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.821902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.821928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.822011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.822036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.822115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.822138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.822218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.822241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 
00:28:24.419 [2024-12-06 19:26:34.822323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.822346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.822428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.822452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.822557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.419 [2024-12-06 19:26:34.822582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.419 qpair failed and we were unable to recover it. 00:28:24.419 [2024-12-06 19:26:34.822675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.822700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.822776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.822799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 
00:28:24.420 [2024-12-06 19:26:34.822879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.822902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.822988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.823011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.823103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.823126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.823237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.823261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.823347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.823372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 
00:28:24.420 [2024-12-06 19:26:34.823475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.823499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.823574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.823598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.823678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.823710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.823797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.823821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.823900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.823923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 
00:28:24.420 [2024-12-06 19:26:34.824003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.824026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.824108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.824131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.824215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.824238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.824321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.824344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.824426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.824449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 
00:28:24.420 [2024-12-06 19:26:34.824555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.824579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.824693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.824718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.824805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.824829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.824922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.824946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.825022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.825047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 
00:28:24.420 [2024-12-06 19:26:34.825126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.825151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.825236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.825260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.825337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.825363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 [2024-12-06 19:26:34.825443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.825467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 00:28:24.420 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.420 [2024-12-06 19:26:34.825548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.420 [2024-12-06 19:26:34.825572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.420 qpair failed and we were unable to recover it. 
00:28:24.420 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.420 [2024-12-06 19:26:34.825658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.825694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.825770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.421 [2024-12-06 19:26:34.825794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.825876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.825901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.825994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.826018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 
00:28:24.421 [2024-12-06 19:26:34.826106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.826136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.826209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.826233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.826309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.826333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.826411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.826436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.826545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.826569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 
00:28:24.421 [2024-12-06 19:26:34.826648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.826679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.826764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.826788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.826871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.826895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.826979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.827103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 
00:28:24.421 [2024-12-06 19:26:34.827202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.827303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.827405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.827515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.827625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 
00:28:24.421 [2024-12-06 19:26:34.827734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.827853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.827957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.827982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.828060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.828084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.828200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.828225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 
00:28:24.421 [2024-12-06 19:26:34.828300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.828325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bcfa0 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.828447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.828492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82cc000b90 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.828607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.828640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.828762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.828792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.828884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.828910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 
00:28:24.421 [2024-12-06 19:26:34.829042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.829069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.829171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.829197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 00:28:24.421 [2024-12-06 19:26:34.829281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.421 [2024-12-06 19:26:34.829312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82d4000b90 with addr=10.0.0.2, port=4420 00:28:24.421 qpair failed and we were unable to recover it. 
00:28:24.421 [2024-12-06 19:26:34.829478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:24.421 [2024-12-06 19:26:34.832123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.421 [2024-12-06 19:26:34.832250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.421 [2024-12-06 19:26:34.832280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.421 [2024-12-06 19:26:34.832305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.421 [2024-12-06 19:26:34.832330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.421 [2024-12-06 19:26:34.832379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.421 qpair failed and we were unable to recover it.
00:28:24.421 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.421 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:24.421 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.421 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:24.422 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.422 19:26:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1231899
00:28:24.422 [2024-12-06 19:26:34.841901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.422 [2024-12-06 19:26:34.842014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.422 [2024-12-06 19:26:34.842050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.422 [2024-12-06 19:26:34.842075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.422 [2024-12-06 19:26:34.842098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.422 [2024-12-06 19:26:34.842140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.422 qpair failed and we were unable to recover it.
00:28:24.422 [2024-12-06 19:26:34.852004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.422 [2024-12-06 19:26:34.852102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.422 [2024-12-06 19:26:34.852137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.422 [2024-12-06 19:26:34.852163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.422 [2024-12-06 19:26:34.852186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.422 [2024-12-06 19:26:34.852228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.422 qpair failed and we were unable to recover it.
00:28:24.422 [2024-12-06 19:26:34.862019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.422 [2024-12-06 19:26:34.862128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.422 [2024-12-06 19:26:34.862160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.422 [2024-12-06 19:26:34.862184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.422 [2024-12-06 19:26:34.862207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.422 [2024-12-06 19:26:34.862250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.422 qpair failed and we were unable to recover it.
00:28:24.422 [2024-12-06 19:26:34.871885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.422 [2024-12-06 19:26:34.871987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.422 [2024-12-06 19:26:34.872020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.422 [2024-12-06 19:26:34.872044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.422 [2024-12-06 19:26:34.872067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.422 [2024-12-06 19:26:34.872123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.422 qpair failed and we were unable to recover it.
00:28:24.422 [2024-12-06 19:26:34.881906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.422 [2024-12-06 19:26:34.882038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.422 [2024-12-06 19:26:34.882066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.422 [2024-12-06 19:26:34.882089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.422 [2024-12-06 19:26:34.882112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.422 [2024-12-06 19:26:34.882167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.422 qpair failed and we were unable to recover it.
00:28:24.422 [2024-12-06 19:26:34.891913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.422 [2024-12-06 19:26:34.892049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.422 [2024-12-06 19:26:34.892077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.422 [2024-12-06 19:26:34.892100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.422 [2024-12-06 19:26:34.892123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.422 [2024-12-06 19:26:34.892165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.422 qpair failed and we were unable to recover it.
00:28:24.422 [2024-12-06 19:26:34.901980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.422 [2024-12-06 19:26:34.902086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.422 [2024-12-06 19:26:34.902116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.422 [2024-12-06 19:26:34.902147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.422 [2024-12-06 19:26:34.902170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.422 [2024-12-06 19:26:34.902226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.422 qpair failed and we were unable to recover it.
00:28:24.422 [2024-12-06 19:26:34.912012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.422 [2024-12-06 19:26:34.912110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.422 [2024-12-06 19:26:34.912142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.422 [2024-12-06 19:26:34.912165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.422 [2024-12-06 19:26:34.912187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.422 [2024-12-06 19:26:34.912232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.422 qpair failed and we were unable to recover it.
00:28:24.689 [2024-12-06 19:26:34.922075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.689 [2024-12-06 19:26:34.922181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.689 [2024-12-06 19:26:34.922211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.689 [2024-12-06 19:26:34.922236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.689 [2024-12-06 19:26:34.922259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.689 [2024-12-06 19:26:34.922301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.689 qpair failed and we were unable to recover it.
00:28:24.690 [2024-12-06 19:26:34.932044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.690 [2024-12-06 19:26:34.932132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.690 [2024-12-06 19:26:34.932165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.690 [2024-12-06 19:26:34.932190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.690 [2024-12-06 19:26:34.932214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.690 [2024-12-06 19:26:34.932257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.690 qpair failed and we were unable to recover it.
00:28:24.690 [2024-12-06 19:26:34.942047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.690 [2024-12-06 19:26:34.942141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.690 [2024-12-06 19:26:34.942173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.690 [2024-12-06 19:26:34.942198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.690 [2024-12-06 19:26:34.942221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.690 [2024-12-06 19:26:34.942271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.690 qpair failed and we were unable to recover it.
00:28:24.690 [2024-12-06 19:26:34.952113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.690 [2024-12-06 19:26:34.952212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.690 [2024-12-06 19:26:34.952243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.690 [2024-12-06 19:26:34.952268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.690 [2024-12-06 19:26:34.952292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.690 [2024-12-06 19:26:34.952335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.690 qpair failed and we were unable to recover it.
00:28:24.690 [2024-12-06 19:26:34.962115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.690 [2024-12-06 19:26:34.962213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.690 [2024-12-06 19:26:34.962244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.690 [2024-12-06 19:26:34.962269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.690 [2024-12-06 19:26:34.962291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.690 [2024-12-06 19:26:34.962333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.690 qpair failed and we were unable to recover it.
00:28:24.690 [2024-12-06 19:26:34.972149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.690 [2024-12-06 19:26:34.972241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.690 [2024-12-06 19:26:34.972273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.690 [2024-12-06 19:26:34.972298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.690 [2024-12-06 19:26:34.972319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.690 [2024-12-06 19:26:34.972362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.690 qpair failed and we were unable to recover it.
00:28:24.690 [2024-12-06 19:26:34.982305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.690 [2024-12-06 19:26:34.982436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.690 [2024-12-06 19:26:34.982463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.690 [2024-12-06 19:26:34.982487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.690 [2024-12-06 19:26:34.982510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.690 [2024-12-06 19:26:34.982552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.690 qpair failed and we were unable to recover it.
00:28:24.690 [2024-12-06 19:26:34.992181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.690 [2024-12-06 19:26:34.992272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.690 [2024-12-06 19:26:34.992307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.690 [2024-12-06 19:26:34.992334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.690 [2024-12-06 19:26:34.992354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.690 [2024-12-06 19:26:34.992397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.690 qpair failed and we were unable to recover it.
00:28:24.690 [2024-12-06 19:26:35.002243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.690 [2024-12-06 19:26:35.002339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.002371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.002396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.002419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.002461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.012258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.012351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.012383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.012406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.012429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.012471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.022360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.022458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.022490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.022514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.022536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.022595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.032311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.032430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.032464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.032488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.032511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.032554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.042345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.042443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.042475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.042499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.042522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.042564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.052438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.052528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.052562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.052587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.052610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.052652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.062416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.062515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.062550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.062575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.062597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.062640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.072443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.072533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.072566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.072590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.072619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.072670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.082442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.082567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.082593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.082616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.082640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.082689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.092465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.092548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.092581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.092607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.092628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.092678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.102505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.102602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.102634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.102658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.102690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.102734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.112544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.112642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.112685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.112711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.112733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.112775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.122561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.122679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.122708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.122732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.122755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.122798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.132593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.132699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.132746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.132769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.132792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.132835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.142675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.142823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.142850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.142873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.142896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.142939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.152641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.152742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.152772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.152796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.152819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.152862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.162658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.162755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.162793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.162817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.162839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.162882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.172721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.691 [2024-12-06 19:26:35.172820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.691 [2024-12-06 19:26:35.172852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.691 [2024-12-06 19:26:35.172878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.691 [2024-12-06 19:26:35.172901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90
00:28:24.691 [2024-12-06 19:26:35.172943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:24.691 qpair failed and we were unable to recover it.
00:28:24.691 [2024-12-06 19:26:35.182741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.691 [2024-12-06 19:26:35.182838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.691 [2024-12-06 19:26:35.182870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.691 [2024-12-06 19:26:35.182895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.691 [2024-12-06 19:26:35.182917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.691 [2024-12-06 19:26:35.182960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.691 qpair failed and we were unable to recover it. 
00:28:24.691 [2024-12-06 19:26:35.192791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.691 [2024-12-06 19:26:35.192891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.691 [2024-12-06 19:26:35.192923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.691 [2024-12-06 19:26:35.192948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.691 [2024-12-06 19:26:35.192970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.691 [2024-12-06 19:26:35.193013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.691 qpair failed and we were unable to recover it. 
00:28:24.691 [2024-12-06 19:26:35.202808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.691 [2024-12-06 19:26:35.202900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.691 [2024-12-06 19:26:35.202932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.691 [2024-12-06 19:26:35.202957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.691 [2024-12-06 19:26:35.202986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.691 [2024-12-06 19:26:35.203029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.691 qpair failed and we were unable to recover it. 
00:28:24.691 [2024-12-06 19:26:35.212807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.691 [2024-12-06 19:26:35.212903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.691 [2024-12-06 19:26:35.212936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.692 [2024-12-06 19:26:35.212961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.692 [2024-12-06 19:26:35.212984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.692 [2024-12-06 19:26:35.213026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.692 qpair failed and we were unable to recover it. 
00:28:24.692 [2024-12-06 19:26:35.222877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.692 [2024-12-06 19:26:35.222978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.692 [2024-12-06 19:26:35.223008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.692 [2024-12-06 19:26:35.223032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.692 [2024-12-06 19:26:35.223055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.692 [2024-12-06 19:26:35.223097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.692 qpair failed and we were unable to recover it. 
00:28:24.692 [2024-12-06 19:26:35.232909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.692 [2024-12-06 19:26:35.232993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.692 [2024-12-06 19:26:35.233027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.692 [2024-12-06 19:26:35.233053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.692 [2024-12-06 19:26:35.233075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.692 [2024-12-06 19:26:35.233117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.692 qpair failed and we were unable to recover it. 
00:28:24.692 [2024-12-06 19:26:35.242997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.692 [2024-12-06 19:26:35.243096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.692 [2024-12-06 19:26:35.243127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.692 [2024-12-06 19:26:35.243151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.692 [2024-12-06 19:26:35.243174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.692 [2024-12-06 19:26:35.243233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.692 qpair failed and we were unable to recover it. 
00:28:24.692 [2024-12-06 19:26:35.252985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.692 [2024-12-06 19:26:35.253079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.692 [2024-12-06 19:26:35.253111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.692 [2024-12-06 19:26:35.253135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.692 [2024-12-06 19:26:35.253158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.692 [2024-12-06 19:26:35.253200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.692 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.263028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.959 [2024-12-06 19:26:35.263130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.959 [2024-12-06 19:26:35.263162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.959 [2024-12-06 19:26:35.263186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.959 [2024-12-06 19:26:35.263209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.959 [2024-12-06 19:26:35.263250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.273104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.959 [2024-12-06 19:26:35.273198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.959 [2024-12-06 19:26:35.273230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.959 [2024-12-06 19:26:35.273255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.959 [2024-12-06 19:26:35.273278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.959 [2024-12-06 19:26:35.273319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.283037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.959 [2024-12-06 19:26:35.283141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.959 [2024-12-06 19:26:35.283170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.959 [2024-12-06 19:26:35.283194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.959 [2024-12-06 19:26:35.283216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.959 [2024-12-06 19:26:35.283259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.293086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.959 [2024-12-06 19:26:35.293212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.959 [2024-12-06 19:26:35.293238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.959 [2024-12-06 19:26:35.293262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.959 [2024-12-06 19:26:35.293285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.959 [2024-12-06 19:26:35.293343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.303108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.959 [2024-12-06 19:26:35.303208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.959 [2024-12-06 19:26:35.303240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.959 [2024-12-06 19:26:35.303264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.959 [2024-12-06 19:26:35.303287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.959 [2024-12-06 19:26:35.303329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.313240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.959 [2024-12-06 19:26:35.313347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.959 [2024-12-06 19:26:35.313394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.959 [2024-12-06 19:26:35.313417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.959 [2024-12-06 19:26:35.313439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.959 [2024-12-06 19:26:35.313496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.323153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.959 [2024-12-06 19:26:35.323249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.959 [2024-12-06 19:26:35.323281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.959 [2024-12-06 19:26:35.323306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.959 [2024-12-06 19:26:35.323329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.959 [2024-12-06 19:26:35.323372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.333200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.959 [2024-12-06 19:26:35.333297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.959 [2024-12-06 19:26:35.333328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.959 [2024-12-06 19:26:35.333360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.959 [2024-12-06 19:26:35.333382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.959 [2024-12-06 19:26:35.333424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.343224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.959 [2024-12-06 19:26:35.343323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.959 [2024-12-06 19:26:35.343355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.959 [2024-12-06 19:26:35.343379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.959 [2024-12-06 19:26:35.343401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.959 [2024-12-06 19:26:35.343442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-06 19:26:35.353280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.353370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.353404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.353429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.353451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.353507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.363304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.363400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.363432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.363457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.363479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.363520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.373364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.373486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.373516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.373540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.373563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.373626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.383379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.383488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.383519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.383544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.383567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.383610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.393388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.393504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.393537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.393562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.393585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.393642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.403408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.403503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.403536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.403562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.403584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.403626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.413422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.413509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.413541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.413564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.413584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.413626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.423463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.423578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.423607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.423630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.423652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.423711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.433493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.433584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.433616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.433640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.433662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.433719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.443517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.443603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.443638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.443672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.443698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.443741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.453513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.453611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.453642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.453674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.453700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.453743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.463661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.463785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.463817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.463852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.463874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.463918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.473635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.473743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.473778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.473803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.473826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.473867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.483609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.483716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.483748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.960 [2024-12-06 19:26:35.483773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.960 [2024-12-06 19:26:35.483796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.960 [2024-12-06 19:26:35.483838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-06 19:26:35.493647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.960 [2024-12-06 19:26:35.493753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.960 [2024-12-06 19:26:35.493786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.961 [2024-12-06 19:26:35.493811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.961 [2024-12-06 19:26:35.493833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.961 [2024-12-06 19:26:35.493875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.961 qpair failed and we were unable to recover it. 
00:28:24.961 [2024-12-06 19:26:35.503772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.961 [2024-12-06 19:26:35.503874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.961 [2024-12-06 19:26:35.503906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.961 [2024-12-06 19:26:35.503930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.961 [2024-12-06 19:26:35.503952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.961 [2024-12-06 19:26:35.504003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.961 qpair failed and we were unable to recover it. 
00:28:24.961 [2024-12-06 19:26:35.513711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.961 [2024-12-06 19:26:35.513803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.961 [2024-12-06 19:26:35.513835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.961 [2024-12-06 19:26:35.513860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.961 [2024-12-06 19:26:35.513883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.961 [2024-12-06 19:26:35.513925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.961 qpair failed and we were unable to recover it. 
00:28:24.961 [2024-12-06 19:26:35.523731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.961 [2024-12-06 19:26:35.523830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.961 [2024-12-06 19:26:35.523861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.961 [2024-12-06 19:26:35.523886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.961 [2024-12-06 19:26:35.523908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.961 [2024-12-06 19:26:35.523950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.961 qpair failed and we were unable to recover it. 
00:28:24.961 [2024-12-06 19:26:35.533762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.961 [2024-12-06 19:26:35.533858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.961 [2024-12-06 19:26:35.533889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.961 [2024-12-06 19:26:35.533914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.961 [2024-12-06 19:26:35.533936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:24.961 [2024-12-06 19:26:35.533978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.961 qpair failed and we were unable to recover it. 
00:28:25.218 [2024-12-06 19:26:35.543812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.218 [2024-12-06 19:26:35.543914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.218 [2024-12-06 19:26:35.543946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.218 [2024-12-06 19:26:35.543970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.218 [2024-12-06 19:26:35.543992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.218 [2024-12-06 19:26:35.544033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.218 qpair failed and we were unable to recover it. 
00:28:25.218 [2024-12-06 19:26:35.553824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.218 [2024-12-06 19:26:35.553932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.218 [2024-12-06 19:26:35.553964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.218 [2024-12-06 19:26:35.553988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.219 [2024-12-06 19:26:35.554011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.219 [2024-12-06 19:26:35.554053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.219 qpair failed and we were unable to recover it. 
00:28:25.219 [2024-12-06 19:26:35.563849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.219 [2024-12-06 19:26:35.563952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.219 [2024-12-06 19:26:35.563985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.219 [2024-12-06 19:26:35.564010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.219 [2024-12-06 19:26:35.564033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.219 [2024-12-06 19:26:35.564074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.219 qpair failed and we were unable to recover it. 
00:28:25.219 [2024-12-06 19:26:35.573892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.219 [2024-12-06 19:26:35.573984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.219 [2024-12-06 19:26:35.574017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.219 [2024-12-06 19:26:35.574043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.219 [2024-12-06 19:26:35.574065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.219 [2024-12-06 19:26:35.574107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.219 qpair failed and we were unable to recover it. 
00:28:25.219 [2024-12-06 19:26:35.583937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.219 [2024-12-06 19:26:35.584034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.219 [2024-12-06 19:26:35.584066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.219 [2024-12-06 19:26:35.584093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.219 [2024-12-06 19:26:35.584114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.219 [2024-12-06 19:26:35.584170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.219 qpair failed and we were unable to recover it. 
00:28:25.219 [2024-12-06 19:26:35.593991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.219 [2024-12-06 19:26:35.594116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.219 [2024-12-06 19:26:35.594148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.219 [2024-12-06 19:26:35.594172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.219 [2024-12-06 19:26:35.594195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.219 [2024-12-06 19:26:35.594237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.219 qpair failed and we were unable to recover it. 
00:28:25.219 [2024-12-06 19:26:35.603984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.219 [2024-12-06 19:26:35.604080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.219 [2024-12-06 19:26:35.604111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.219 [2024-12-06 19:26:35.604134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.219 [2024-12-06 19:26:35.604157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.219 [2024-12-06 19:26:35.604199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.219 qpair failed and we were unable to recover it. 
00:28:25.219 [2024-12-06 19:26:35.614219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.219 [2024-12-06 19:26:35.614326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.219 [2024-12-06 19:26:35.614357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.219 [2024-12-06 19:26:35.614386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.219 [2024-12-06 19:26:35.614409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.219 [2024-12-06 19:26:35.614465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.219 qpair failed and we were unable to recover it. 
00:28:25.219 [2024-12-06 19:26:35.624103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.219 [2024-12-06 19:26:35.624241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.219 [2024-12-06 19:26:35.624268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.219 [2024-12-06 19:26:35.624290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.219 [2024-12-06 19:26:35.624314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.219 [2024-12-06 19:26:35.624356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.219 qpair failed and we were unable to recover it. 
00:28:25.219 [2024-12-06 19:26:35.634216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.219 [2024-12-06 19:26:35.634329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.219 [2024-12-06 19:26:35.634395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.219 [2024-12-06 19:26:35.634421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.219 [2024-12-06 19:26:35.634464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82d4000b90 00:28:25.219 [2024-12-06 19:26:35.634540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:25.219 qpair failed and we were unable to recover it. 00:28:25.219 A controller has encountered a failure and is being reset. 00:28:25.219 Controller properly reset. 00:28:30.474 Initializing NVMe Controllers 00:28:30.474 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:30.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:30.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:30.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:30.474 Initialization complete. Launching workers. 
00:28:30.474 Starting thread on core 1 00:28:30.474 Starting thread on core 2 00:28:30.474 Starting thread on core 3 00:28:30.474 Starting thread on core 0 00:28:30.474 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:30.474 00:28:30.474 real 0m10.786s 00:28:30.474 user 0m32.485s 00:28:30.474 sys 0m6.416s 00:28:30.474 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.474 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.474 ************************************ 00:28:30.474 END TEST nvmf_target_disconnect_tc2 00:28:30.474 ************************************ 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.475 rmmod nvme_tcp 00:28:30.475 rmmod nvme_fabrics 00:28:30.475 rmmod nvme_keyring 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1232414 ']' 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1232414 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1232414 ']' 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1232414 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1232414 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1232414' 00:28:30.475 killing process with pid 1232414 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1232414 00:28:30.475 19:26:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1232414 00:28:30.733 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.733 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.733 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.733 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:30.733 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:30.733 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.733 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.733 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.733 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.734 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.734 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.734 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.673 19:26:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.673 00:28:32.673 real 0m15.773s 00:28:32.673 user 0m58.252s 00:28:32.673 sys 0m9.025s 00:28:32.673 19:26:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.673 19:26:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:32.673 ************************************ 00:28:32.673 END TEST nvmf_target_disconnect 00:28:32.673 ************************************ 00:28:32.673 19:26:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:32.673 00:28:32.673 real 5m6.397s 00:28:32.673 user 11m5.776s 00:28:32.673 sys 1m15.189s 00:28:32.673 19:26:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.673 19:26:43 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.673 ************************************ 00:28:32.673 END TEST nvmf_host 00:28:32.673 ************************************ 00:28:32.673 19:26:43 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:32.673 19:26:43 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:32.673 19:26:43 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:32.673 19:26:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:32.673 19:26:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.673 19:26:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:32.673 ************************************ 00:28:32.673 START TEST nvmf_target_core_interrupt_mode 00:28:32.673 ************************************ 00:28:32.673 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:32.965 * Looking for test storage... 
00:28:32.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:32.965 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:32.965 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:32.965 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:32.965 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:32.965 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:32.966 19:26:43 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.966 --rc 
genhtml_branch_coverage=1 00:28:32.966 --rc genhtml_function_coverage=1 00:28:32.966 --rc genhtml_legend=1 00:28:32.966 --rc geninfo_all_blocks=1 00:28:32.966 --rc geninfo_unexecuted_blocks=1 00:28:32.966 00:28:32.966 ' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.966 --rc genhtml_branch_coverage=1 00:28:32.966 --rc genhtml_function_coverage=1 00:28:32.966 --rc genhtml_legend=1 00:28:32.966 --rc geninfo_all_blocks=1 00:28:32.966 --rc geninfo_unexecuted_blocks=1 00:28:32.966 00:28:32.966 ' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.966 --rc genhtml_branch_coverage=1 00:28:32.966 --rc genhtml_function_coverage=1 00:28:32.966 --rc genhtml_legend=1 00:28:32.966 --rc geninfo_all_blocks=1 00:28:32.966 --rc geninfo_unexecuted_blocks=1 00:28:32.966 00:28:32.966 ' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.966 --rc genhtml_branch_coverage=1 00:28:32.966 --rc genhtml_function_coverage=1 00:28:32.966 --rc genhtml_legend=1 00:28:32.966 --rc geninfo_all_blocks=1 00:28:32.966 --rc geninfo_unexecuted_blocks=1 00:28:32.966 00:28:32.966 ' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.966 
19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.966 19:26:43 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:32.966 
19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.966 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:32.966 ************************************ 00:28:32.966 START TEST nvmf_abort 00:28:32.966 ************************************ 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:32.967 * Looking for test storage... 
00:28:32.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:32.967 19:26:43 
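The span above traces scripts/common.sh's `cmp_versions` deciding that lcov `1.15` is less than `2`: each version is split on `IFS=.-:` into components, components are compared numerically, and the shorter list is padded with zeros. A minimal Python sketch of that comparison logic — a re-implementation for illustration, not the SPDK script itself:

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    """Component-wise numeric comparison, as in the cmp_versions trace:
    split on '.', '-' or ':' (IFS=.-:), pad the shorter list with zeros."""
    a = [int(x) for x in re.split(r"[.:-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", v2) if x.isdigit()]
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        if x != y:
            return x < y
    return False

print(version_lt("1.15", "2"))  # the lcov check traced above -> True
```

Note that the comparison is numeric, not lexicographic, which is why `1.15 < 2` holds even though the string `"1.15"` sorts before `"2"` anyway but `"1.2" < "1.10"` would not.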
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:32.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.967 --rc genhtml_branch_coverage=1 00:28:32.967 --rc genhtml_function_coverage=1 00:28:32.967 --rc genhtml_legend=1 00:28:32.967 --rc geninfo_all_blocks=1 00:28:32.967 --rc geninfo_unexecuted_blocks=1 00:28:32.967 00:28:32.967 ' 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:32.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.967 --rc genhtml_branch_coverage=1 00:28:32.967 --rc genhtml_function_coverage=1 00:28:32.967 --rc genhtml_legend=1 00:28:32.967 --rc geninfo_all_blocks=1 00:28:32.967 --rc geninfo_unexecuted_blocks=1 00:28:32.967 00:28:32.967 ' 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:32.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.967 --rc genhtml_branch_coverage=1 00:28:32.967 --rc genhtml_function_coverage=1 00:28:32.967 --rc genhtml_legend=1 00:28:32.967 --rc geninfo_all_blocks=1 00:28:32.967 --rc geninfo_unexecuted_blocks=1 00:28:32.967 00:28:32.967 ' 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:32.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.967 --rc genhtml_branch_coverage=1 00:28:32.967 --rc genhtml_function_coverage=1 00:28:32.967 --rc genhtml_legend=1 00:28:32.967 --rc geninfo_all_blocks=1 00:28:32.967 --rc geninfo_unexecuted_blocks=1 00:28:32.967 00:28:32.967 ' 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.967 19:26:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.967 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:32.968 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:32.968 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:32.968 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:32.968 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:32.968 19:26:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:32.968 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:32.968 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.227 19:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:35.134 19:26:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:35.134 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:35.134 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:35.134 
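The matching above checks each discovered NIC's PCI device ID against the vendor/device buckets built earlier (`e810`, `x722`, `mlx`) before deciding how to handle it. A small Python sketch of that classification — the ID sets are copied from the arrays visible in the trace, the function name is hypothetical:

```python
# Vendor IDs from the trace: intel=0x8086, mellanox=0x15b3.
E810 = {"0x1592", "0x159b"}                      # Intel E810 family
X722 = {"0x37d2"}                                # Intel X722 family
MLX = {"0xa2dc", "0x1021", "0xa2d6", "0x101d",   # Mellanox ConnectX family
       "0x101b", "0x1017", "0x1019", "0x1015", "0x1013"}

def nic_family(vendor: str, device: str) -> str:
    """Bucket a PCI vendor/device pair the way the gather step above does."""
    if vendor == "0x8086" and device in E810:
        return "e810"
    if vendor == "0x8086" and device in X722:
        return "x722"
    if vendor == "0x15b3" and device in MLX:
        return "mlx"
    return "unknown"

print(nic_family("0x8086", "0x159b"))  # the two ports found above -> e810
```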
19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:35.134 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:35.134 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.134 19:26:45 
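Each "Found net devices under …" line above comes from globbing the PCI function's `net/` directory in sysfs, which maps a PCI address to its kernel interface names. A Python equivalent of that lookup, written against a caller-supplied sysfs root so it can be exercised without real hardware (the helper name is illustrative):

```python
import glob
import os

def net_devs_for_pci(sysfs_root: str, pci_addr: str) -> list[str]:
    """List kernel net interface names under a PCI function, mirroring
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace."""
    pattern = os.path.join(sysfs_root, "bus/pci/devices", pci_addr, "net", "*")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))
```

On the machine in this log, `net_devs_for_pci("/sys", "0000:0a:00.0")` would have returned `["cvl_0_0"]`, matching the echo above.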
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:35.134 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:35.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:28:35.393 00:28:35.393 --- 10.0.0.2 ping statistics --- 00:28:35.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.393 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:35.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:28:35.393 00:28:35.393 --- 10.0.0.1 ping statistics --- 00:28:35.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.393 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1235239 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1235239 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1235239 ']' 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.393 19:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.393 [2024-12-06 19:26:45.819049] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:35.393 [2024-12-06 19:26:45.820217] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:28:35.393 [2024-12-06 19:26:45.820281] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.393 [2024-12-06 19:26:45.907979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:35.652 [2024-12-06 19:26:45.986193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.652 [2024-12-06 19:26:45.986258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.652 [2024-12-06 19:26:45.986299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.652 [2024-12-06 19:26:45.986321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.652 [2024-12-06 19:26:45.986340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:35.652 [2024-12-06 19:26:45.988264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.652 [2024-12-06 19:26:45.988333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.652 [2024-12-06 19:26:45.988343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.652 [2024-12-06 19:26:46.084102] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:35.652 [2024-12-06 19:26:46.084318] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:35.653 [2024-12-06 19:26:46.084350] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:35.653 [2024-12-06 19:26:46.084574] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.653 [2024-12-06 19:26:46.133238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:35.653 Malloc0 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.653 Delay0 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.653 [2024-12-06 19:26:46.205453] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.653 19:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:35.910 [2024-12-06 19:26:46.315555] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:37.808 Initializing NVMe Controllers 00:28:37.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:37.808 controller IO queue size 128 less than required 00:28:37.808 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:37.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:37.808 Initialization complete. Launching workers. 
00:28:37.808 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28976 00:28:37.808 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29033, failed to submit 66 00:28:37.808 success 28976, unsuccessful 57, failed 0 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.808 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.808 rmmod nvme_tcp 00:28:37.808 rmmod nvme_fabrics 00:28:37.808 rmmod nvme_keyring 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:38.066 19:26:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1235239 ']' 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1235239 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1235239 ']' 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1235239 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1235239 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1235239' 00:28:38.066 killing process with pid 1235239 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1235239 00:28:38.066 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1235239 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:38.326 19:26:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.326 19:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:40.233 00:28:40.233 real 0m7.312s 00:28:40.233 user 0m9.225s 00:28:40.233 sys 0m2.896s 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:40.233 ************************************ 00:28:40.233 END TEST nvmf_abort 00:28:40.233 ************************************ 00:28:40.233 19:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:40.233 ************************************ 00:28:40.233 START TEST nvmf_ns_hotplug_stress 00:28:40.233 ************************************ 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:40.233 * Looking for test storage... 
00:28:40.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:40.233 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.492 19:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.492 19:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:40.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.492 --rc genhtml_branch_coverage=1 00:28:40.492 --rc genhtml_function_coverage=1 00:28:40.492 --rc genhtml_legend=1 00:28:40.492 --rc geninfo_all_blocks=1 00:28:40.492 --rc geninfo_unexecuted_blocks=1 00:28:40.492 00:28:40.492 ' 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:40.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.492 --rc genhtml_branch_coverage=1 00:28:40.492 --rc genhtml_function_coverage=1 00:28:40.492 --rc genhtml_legend=1 00:28:40.492 --rc geninfo_all_blocks=1 00:28:40.492 --rc geninfo_unexecuted_blocks=1 00:28:40.492 00:28:40.492 ' 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:40.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.492 --rc genhtml_branch_coverage=1 00:28:40.492 --rc genhtml_function_coverage=1 00:28:40.492 --rc genhtml_legend=1 00:28:40.492 --rc geninfo_all_blocks=1 00:28:40.492 --rc geninfo_unexecuted_blocks=1 00:28:40.492 00:28:40.492 ' 00:28:40.492 19:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:40.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.492 --rc genhtml_branch_coverage=1 00:28:40.492 --rc genhtml_function_coverage=1 00:28:40.492 --rc genhtml_legend=1 00:28:40.492 --rc geninfo_all_blocks=1 00:28:40.492 --rc geninfo_unexecuted_blocks=1 00:28:40.492 00:28:40.492 ' 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.492 19:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.492 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.493 
19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:40.493 19:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.018 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.018 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.018 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.018 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.018 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.019 
19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.019 19:26:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:43.019 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.019 19:26:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:43.019 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.019 
19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:43.019 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:43.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:43.019 
19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.019 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:28:43.020 00:28:43.020 --- 10.0.0.2 ping statistics --- 00:28:43.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.020 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:28:43.020 00:28:43.020 --- 10.0.0.1 ping statistics --- 00:28:43.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.020 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.020 19:26:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1237462 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1237462 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1237462 ']' 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.020 [2024-12-06 19:26:53.224440] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:43.020 [2024-12-06 19:26:53.225608] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:28:43.020 [2024-12-06 19:26:53.225677] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.020 [2024-12-06 19:26:53.297298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.020 [2024-12-06 19:26:53.357124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.020 [2024-12-06 19:26:53.357189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.020 [2024-12-06 19:26:53.357217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.020 [2024-12-06 19:26:53.357228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.020 [2024-12-06 19:26:53.357238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:43.020 [2024-12-06 19:26:53.358863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.020 [2024-12-06 19:26:53.358896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.020 [2024-12-06 19:26:53.358900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.020 [2024-12-06 19:26:53.456751] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:43.020 [2024-12-06 19:26:53.457004] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:43.020 [2024-12-06 19:26:53.457014] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:43.020 [2024-12-06 19:26:53.457248] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:43.020 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:43.277 [2024-12-06 19:26:53.767752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.277 19:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:43.535 19:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.792 [2024-12-06 19:26:54.312075] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.792 19:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:44.049 19:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:44.305 Malloc0 00:28:44.562 19:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:44.818 Delay0 00:28:44.818 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.074 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:45.331 NULL1 00:28:45.331 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:45.589 19:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1237876 00:28:45.589 19:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:45.589 19:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:45.589 19:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.849 19:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.106 19:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:46.106 19:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:46.364 true 00:28:46.364 19:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:46.364 19:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:46.621 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.879 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:46.879 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:47.136 true 00:28:47.136 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:47.136 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.395 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.653 19:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:47.653 19:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:47.911 true 00:28:47.911 19:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:47.911 19:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.860 Read completed with error (sct=0, sc=11) 00:28:48.861 19:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:49.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.121 19:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:49.121 19:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:49.378 true 00:28:49.635 19:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:49.635 19:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.892 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.150 19:27:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:50.150 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:50.408 true 00:28:50.408 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:50.408 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.341 19:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.341 19:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:51.341 19:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:51.599 true 00:28:51.599 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:51.599 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.858 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.116 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:52.116 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:52.374 true 00:28:52.374 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:52.374 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.633 19:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.199 19:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:53.199 19:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:53.199 true 00:28:53.199 19:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:53.199 19:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.133 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.699 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:54.699 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:54.699 true 00:28:54.699 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:54.699 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.955 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.212 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:55.212 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:55.775 true 00:28:55.775 19:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:55.775 19:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:55.775 19:27:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:56.032 19:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:56.032 19:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:56.289 true 00:28:56.289 19:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:56.289 19:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.221 19:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.787 19:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:57.787 19:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:57.787 true 00:28:57.787 19:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:57.787 19:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:28:58.353 19:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.353 19:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:58.353 19:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:58.610 true 00:28:58.868 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:58.868 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.125 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.383 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:59.383 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:59.642 true 00:28:59.642 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:28:59.642 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:00.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:00.574 19:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.832 19:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:00.832 19:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:01.089 true 00:29:01.089 19:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:01.090 19:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.347 19:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.604 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:01.604 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:01.862 true 00:29:01.862 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:01.862 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.120 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.378 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:02.378 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:02.635 true 00:29:02.635 19:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:02.635 19:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.003 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.003 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:04.003 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:04.259 true 00:29:04.259 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:04.259 19:27:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.516 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.772 19:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:04.772 19:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:05.028 true 00:29:05.028 19:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:05.028 19:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.283 19:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.539 19:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:05.539 19:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:05.796 true 00:29:05.796 19:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 
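The entries above repeat one cycle per iteration: confirm the target process is still alive (`kill -0`), hot-remove namespace 1, hot-add `Delay0` back, bump `null_size`, and grow the `NULL1` bdev. A minimal self-contained sketch of that cycle, assuming a stub `rpc` function in place of `scripts/rpc.py` (so it runs without a live SPDK target) and a hypothetical `TGT_PID` placeholder for the target PID seen in the log:

```shell
# Sketch of the repeating cycle in the log above -- not the verbatim
# ns_hotplug_stress.sh. "rpc" is a stub standing in for scripts/rpc.py;
# TGT_PID is a hypothetical stand-in for the target PID (1237876 here).
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
TGT_PID=$$              # placeholder: PID of the running nvmf target
null_size=1000

for _ in 1 2 3; do
  kill -0 "$TGT_PID" || break               # @44: stop once the target exits
  rpc nvmf_subsystem_remove_ns "$NQN" 1     # @45: hot-remove namespace 1
  rpc nvmf_subsystem_add_ns "$NQN" Delay0   # @46: hot-add it back
  null_size=$((null_size + 1))              # @49: next size
  rpc bdev_null_resize NULL1 "$null_size"   # @50: grow the null bdev
done
```

Each pass emits the same three `rpc.py` invocations visible in the trace, with the resize argument incrementing by one per iteration.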
00:29:05.796 19:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.724 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.981 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:06.981 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:07.238 true 00:29:07.238 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:07.239 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.496 19:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.754 19:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:07.754 19:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:08.012 true 00:29:08.012 19:27:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:08.012 19:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.270 19:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.527 19:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:08.527 19:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:08.785 true 00:29:08.785 19:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:08.785 19:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.174 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.174 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:10.174 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:10.516 true 00:29:10.516 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:10.516 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.795 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.053 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:11.053 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:11.311 true 00:29:11.311 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:11.311 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.570 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.828 19:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:11.828 19:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:12.086 true 00:29:12.086 19:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:12.086 19:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.018 19:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.276 19:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:13.276 19:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:13.534 true 00:29:13.534 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:13.534 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.791 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.049 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1028 00:29:14.049 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:14.306 true 00:29:14.306 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:14.306 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.564 19:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.820 19:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:14.820 19:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:15.077 true 00:29:15.077 19:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:15.077 19:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.449 Initializing NVMe Controllers 00:29:16.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.449 Controller IO queue size 128, less than required. 00:29:16.449 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:16.449 Controller IO queue size 128, less than required. 00:29:16.449 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:16.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:16.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:16.449 Initialization complete. Launching workers. 00:29:16.449 ======================================================== 00:29:16.449 Latency(us) 00:29:16.449 Device Information : IOPS MiB/s Average min max 00:29:16.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 272.97 0.13 169874.08 3496.05 1014493.81 00:29:16.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7579.57 3.70 16837.51 2294.26 366750.49 00:29:16.449 ======================================================== 00:29:16.449 Total : 7852.53 3.83 22157.30 2294.26 1014493.81 00:29:16.449 00:29:16.449 19:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.449 19:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:29:16.449 19:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:29:16.707 true 00:29:16.707 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1237876 00:29:16.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1237876) - No such process 00:29:16.707 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@53 -- # wait 1237876 00:29:16.707 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.965 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:17.223 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:17.223 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:17.223 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:17.223 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:17.223 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:17.482 null0 00:29:17.482 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:17.482 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:17.482 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:17.740 null1 00:29:17.998 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:17.998 
19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:17.998 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:18.256 null2 00:29:18.256 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:18.256 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.256 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:18.518 null3 00:29:18.518 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:18.518 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.518 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:18.776 null4 00:29:18.776 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:18.776 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.776 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:19.034 null5 00:29:19.034 19:27:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.034 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.034 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:19.292 null6 00:29:19.292 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.292 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.292 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:19.552 null7 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1242514 1242515 1242517 1242519 1242521 1242523 1242525 1242527 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.552 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:19.810 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:19.810 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:19.810 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:19.810 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:19.810 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:19.810 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:19.810 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:19.810 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.069 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:20.328 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.328 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:20.328 19:27:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:20.328 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:20.328 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:20.328 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:20.328 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:20.328 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:20.586 19:27:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.586 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.845 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:20.845 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.845 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.845 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.103 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.103 19:27:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:21.103 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:21.103 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:21.103 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.103 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.103 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:21.103 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:21.361 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.362 19:27:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.362 19:27:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.362 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:21.620 19:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:21.620 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:21.620 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.620 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.620 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:21.620 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:21.620 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:21.620 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.879 19:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.879 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.879 19:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:22.137 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.138 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.138 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.138 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.138 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.138 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:22.138 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:22.138 19:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.396 19:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.396 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:22.655 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.655 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.655 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:22.655 19:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.655 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.655 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:22.913 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.913 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.913 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.913 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.913 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.913 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:22.913 19:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:22.913 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
2 nqn.2016-06.io.spdk:cnode1 null1 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.172 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:23.430 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.430 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:23.430 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:23.431 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:23.431 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:23.431 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:23.431 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:23.431 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.689 19:27:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.689 19:27:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:23.689 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.690 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.690 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:23.948 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.948 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:23.948 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:23.948 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:23.948 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:23.948 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:23.948 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:23.948 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.206 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:24.464 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:24.464 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.464 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:24.464 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:29:24.464 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:24.464 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:24.721 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:24.721 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.978 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:25.234 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.234 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:25.234 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:25.234 19:27:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:25.234 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:25.234 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:25.234 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:25.234 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:25.492 19:27:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.492 rmmod nvme_tcp 00:29:25.492 rmmod nvme_fabrics 00:29:25.492 rmmod nvme_keyring 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1237462 ']' 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1237462 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1237462 ']' 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1237462 00:29:25.492 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:25.492 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.492 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1237462 00:29:25.492 19:27:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.492 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.492 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1237462' 00:29:25.492 killing process with pid 1237462 00:29:25.492 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1237462 00:29:25.492 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1237462 00:29:25.750 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.750 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.750 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.750 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:25.750 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:25.750 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.750 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.750 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.751 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.751 19:27:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.751 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.751 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.279 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.279 00:29:28.279 real 0m47.538s 00:29:28.279 user 3m20.301s 00:29:28.279 sys 0m21.166s 00:29:28.279 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.279 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:28.280 ************************************ 00:29:28.280 END TEST nvmf_ns_hotplug_stress 00:29:28.280 ************************************ 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:28.280 ************************************ 00:29:28.280 START TEST nvmf_delete_subsystem 00:29:28.280 ************************************ 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:28.280 * Looking for test storage... 00:29:28.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.280 
19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:28.280 19:27:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:28.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.280 --rc genhtml_branch_coverage=1 00:29:28.280 --rc genhtml_function_coverage=1 00:29:28.280 --rc genhtml_legend=1 00:29:28.280 --rc geninfo_all_blocks=1 00:29:28.280 --rc geninfo_unexecuted_blocks=1 00:29:28.280 00:29:28.280 ' 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:28.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.280 --rc genhtml_branch_coverage=1 00:29:28.280 --rc genhtml_function_coverage=1 00:29:28.280 --rc genhtml_legend=1 00:29:28.280 --rc geninfo_all_blocks=1 00:29:28.280 --rc geninfo_unexecuted_blocks=1 00:29:28.280 00:29:28.280 ' 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:28.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.280 --rc genhtml_branch_coverage=1 00:29:28.280 --rc genhtml_function_coverage=1 00:29:28.280 --rc genhtml_legend=1 00:29:28.280 --rc geninfo_all_blocks=1 00:29:28.280 --rc 
geninfo_unexecuted_blocks=1 00:29:28.280 00:29:28.280 ' 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:28.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.280 --rc genhtml_branch_coverage=1 00:29:28.280 --rc genhtml_function_coverage=1 00:29:28.280 --rc genhtml_legend=1 00:29:28.280 --rc geninfo_all_blocks=1 00:29:28.280 --rc geninfo_unexecuted_blocks=1 00:29:28.280 00:29:28.280 ' 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.280 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.281 
19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.281 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.281 19:27:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.186 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:30.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:29:30.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.187 19:27:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:30.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:30.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:30.187 19:27:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:29:30.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:29:30.187 00:29:30.187 --- 10.0.0.2 ping statistics --- 00:29:30.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.187 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:30.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:29:30.187 00:29:30.187 --- 10.0.0.1 ping statistics --- 00:29:30.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.187 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.187 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1245276 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1245276 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1245276 ']' 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
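The namespace plumbing replayed in the xtrace above (`nvmf_tcp_init` in nvmf/common.sh) follows a fixed pattern: flush both interfaces, move the target-side port into its own network namespace, assign point-to-point addresses, and bring the links up. A minimal sketch of that sequence, assuming the `cvl_0_0`/`cvl_0_1` interface names seen in this run; `RUN=echo` (the default here) prints the commands instead of executing them, so the sequence can be inspected without root:

```shell
# Dry-run by default; set RUN= (empty) to actually execute (requires root).
RUN=${RUN:-echo}

setup_tcp_ns() {
  local target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
  $RUN ip -4 addr flush "$target_if"
  $RUN ip -4 addr flush "$initiator_if"
  # Target interface lives in its own namespace so initiator and target
  # can talk over real TCP on one host.
  $RUN ip netns add "$ns"
  $RUN ip link set "$target_if" netns "$ns"
  $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  $RUN ip link set "$initiator_if" up
  $RUN ip netns exec "$ns" ip link set "$target_if" up
  $RUN ip netns exec "$ns" ip link set lo up
}

setup_tcp_ns cvl_0_0 cvl_0_1
```

The log then opens TCP/4420 with an iptables ACCEPT rule and pings both directions to verify the path before starting `nvmf_tgt` inside the namespace.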
00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.188 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.188 [2024-12-06 19:27:40.738238] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:30.188 [2024-12-06 19:27:40.739293] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:29:30.188 [2024-12-06 19:27:40.739350] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.447 [2024-12-06 19:27:40.812231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:30.447 [2024-12-06 19:27:40.866766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.447 [2024-12-06 19:27:40.866829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.447 [2024-12-06 19:27:40.866856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.447 [2024-12-06 19:27:40.866868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.447 [2024-12-06 19:27:40.866877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.447 [2024-12-06 19:27:40.868225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.447 [2024-12-06 19:27:40.868230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.447 [2024-12-06 19:27:40.953197] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:30.447 [2024-12-06 19:27:40.953208] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:30.447 [2024-12-06 19:27:40.953474] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:30.447 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.447 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:30.447 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.447 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.447 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.447 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.447 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:30.447 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.447 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.447 [2024-12-06 19:27:41.008931] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.447 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.447 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:30.447 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.447 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.705 [2024-12-06 19:27:41.029099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.705 NULL1 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.705 Delay0 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1245416 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:30.705 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:30.705 [2024-12-06 19:27:41.107903] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
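The RPC sequence that delete_subsystem.sh drives in the trace above can be condensed as follows. This is a sketch, not the script itself: `RPC` defaults to `echo rpc.py` so the calls are only printed, and the assumption is that SPDK's `rpc.py` would be invoked against the running `nvmf_tgt`. The delay bdev (1s per op) is what keeps I/O in flight long enough for the subsystem delete to race it:

```shell
# Dry-run by default; point RPC at scripts/rpc.py to talk to a live target.
RPC=${RPC:-echo rpc.py}

drive_delete_subsystem_test() {
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512
  # 1,000,000 us latency on every op class: reads, writes, and their p99 paths.
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # spdk_nvme_perf runs in the background at this point; the delete races its I/O:
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
}

drive_delete_subsystem_test
```

The flood of `completed with error (sct=0, sc=8)` lines that follows is the expected outcome: in-flight commands are aborted (NVMe status "Abort Requested") once the subsystem disappears.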
00:29:32.603 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:32.603 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.603 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.871 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, 
sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 [2024-12-06 19:27:43.404063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0b6800d350 is same with the state(6) to be set 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read 
completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error 
(sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 starting I/O failed: -6 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 
00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 [2024-12-06 19:27:43.404800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc42c0 is same with the state(6) to be set 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 
Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Write completed with error (sct=0, sc=8) 00:29:32.872 Read completed with error (sct=0, sc=8) 00:29:32.873 Read completed with error (sct=0, sc=8) 00:29:32.873 Read completed with error (sct=0, sc=8) 00:29:33.805 [2024-12-06 19:27:44.369588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc59b0 is same with the state(6) to be set 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed 
with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 [2024-12-06 19:27:44.403850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc44a0 is same with the state(6) to be set 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 [2024-12-06 19:27:44.404028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4860 is same with the state(6) to be set 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error 
(sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 [2024-12-06 19:27:44.404161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0b6800d020 is same with the state(6) to be set 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Write completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 Read completed with error (sct=0, sc=8) 00:29:34.065 [2024-12-06 19:27:44.405046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0b6800d680 is same with the state(6) to be set 00:29:34.065 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.065 Initializing NVMe Controllers 00:29:34.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.065 Controller IO queue size 128, less than required. 00:29:34.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:34.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:34.065 Initialization complete. Launching workers. 00:29:34.065 ======================================================== 00:29:34.065 Latency(us) 00:29:34.065 Device Information : IOPS MiB/s Average min max 00:29:34.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.72 0.08 908669.00 430.48 1012992.44 00:29:34.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.33 0.07 959923.59 579.06 2001567.14 00:29:34.065 ======================================================== 00:29:34.065 Total : 314.05 0.15 933203.19 430.48 2001567.14 00:29:34.065 00:29:34.065 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:34.065 [2024-12-06 19:27:44.405584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc59b0 (9): Bad file descriptor 00:29:34.065 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1245416 00:29:34.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:34.065 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 
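The `delay=0` / `kill -0 1245416` / `sleep 0.5` xtrace above is the script confirming that perf actually died after its subsystem was deleted. A self-contained version of that wait loop, mirroring the `(( delay++ > 30 ))` bound (about 15 seconds) seen in delete_subsystem.sh:

```shell
# Poll until a PID exits; give up after 30 half-second checks.
# Returns 0 once the process is gone, 1 on timeout.
wait_for_exit() {
  local pid=$1 delay=0
  while kill -0 "$pid" 2>/dev/null; do
    if (( delay++ > 30 )); then
      return 1
    fi
    sleep 0.5
  done
  return 0
}
```

`kill -0` sends no signal; it only checks whether the PID still exists, which is why the log shows `kill: (1245416) - No such process` once perf has exited.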
00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1245416 00:29:34.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1245416) - No such process 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1245416 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1245416 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1245416 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.633 19:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.633 [2024-12-06 19:27:44.925120] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.633 19:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1245822 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1245822 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:34.633 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:34.633 [2024-12-06 19:27:44.989101] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:34.890 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:34.890 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1245822 00:29:34.890 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:35.455 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:35.455 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1245822 00:29:35.455 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:36.018 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:36.019 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1245822 00:29:36.019 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:36.584 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:36.584 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1245822 00:29:36.584 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:37.149 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:37.149 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1245822 00:29:37.149 19:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:37.406 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:37.406 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1245822 00:29:37.406 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:38.022 Initializing NVMe Controllers 00:29:38.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.022 Controller IO queue size 128, less than required. 00:29:38.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:38.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:38.022 Initialization complete. Launching workers. 
00:29:38.022 ======================================================== 00:29:38.022 Latency(us) 00:29:38.022 Device Information : IOPS MiB/s Average min max 00:29:38.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005703.35 1000232.51 1042734.13 00:29:38.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005113.02 1000196.24 1041752.55 00:29:38.022 ======================================================== 00:29:38.022 Total : 256.00 0.12 1005408.19 1000196.24 1042734.13 00:29:38.022 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1245822 00:29:38.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1245822) - No such process 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1245822 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.022 rmmod nvme_tcp 00:29:38.022 rmmod nvme_fabrics 00:29:38.022 rmmod nvme_keyring 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1245276 ']' 00:29:38.022 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1245276 00:29:38.023 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1245276 ']' 00:29:38.023 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1245276 00:29:38.023 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:38.023 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.023 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1245276 00:29:38.023 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:38.023 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:38.023 19:27:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1245276' 00:29:38.023 killing process with pid 1245276 00:29:38.023 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1245276 00:29:38.023 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1245276 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.304 19:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.304 19:27:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:40.847 00:29:40.847 real 0m12.482s 00:29:40.847 user 0m25.073s 00:29:40.847 sys 0m3.846s 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:40.847 ************************************ 00:29:40.847 END TEST nvmf_delete_subsystem 00:29:40.847 ************************************ 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:40.847 ************************************ 00:29:40.847 START TEST nvmf_host_management 00:29:40.847 ************************************ 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:40.847 * Looking for test storage... 
00:29:40.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:40.847 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.848 19:27:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:40.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.848 --rc genhtml_branch_coverage=1 00:29:40.848 --rc genhtml_function_coverage=1 00:29:40.848 --rc genhtml_legend=1 00:29:40.848 --rc geninfo_all_blocks=1 00:29:40.848 --rc geninfo_unexecuted_blocks=1 00:29:40.848 00:29:40.848 ' 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:40.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.848 --rc genhtml_branch_coverage=1 00:29:40.848 --rc genhtml_function_coverage=1 00:29:40.848 --rc genhtml_legend=1 00:29:40.848 --rc geninfo_all_blocks=1 00:29:40.848 --rc geninfo_unexecuted_blocks=1 00:29:40.848 00:29:40.848 ' 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:40.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.848 --rc genhtml_branch_coverage=1 00:29:40.848 --rc genhtml_function_coverage=1 00:29:40.848 --rc genhtml_legend=1 00:29:40.848 --rc geninfo_all_blocks=1 00:29:40.848 --rc geninfo_unexecuted_blocks=1 00:29:40.848 00:29:40.848 ' 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:40.848 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.848 --rc genhtml_branch_coverage=1 00:29:40.848 --rc genhtml_function_coverage=1 00:29:40.848 --rc genhtml_legend=1 00:29:40.848 --rc geninfo_all_blocks=1 00:29:40.848 --rc geninfo_unexecuted_blocks=1 00:29:40.848 00:29:40.848 ' 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.848 19:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.848 19:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.848 
19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.848 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.849 19:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.752 
19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.752 19:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:42.752 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.752 19:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:42.752 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.752 19:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:42.752 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:42.752 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:42.752 19:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.752 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:29:42.753 00:29:42.753 --- 10.0.0.2 ping statistics --- 00:29:42.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.753 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:42.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:29:42.753 00:29:42.753 --- 10.0.0.1 ping statistics --- 00:29:42.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.753 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:42.753 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1248279 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1248279 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1248279 ']' 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.011 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.012 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.012 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.012 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.012 [2024-12-06 19:27:53.390470] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:43.012 [2024-12-06 19:27:53.391577] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:29:43.012 [2024-12-06 19:27:53.391643] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.012 [2024-12-06 19:27:53.462858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:43.012 [2024-12-06 19:27:53.523903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.012 [2024-12-06 19:27:53.523978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.012 [2024-12-06 19:27:53.523992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.012 [2024-12-06 19:27:53.524008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.012 [2024-12-06 19:27:53.524018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:43.012 [2024-12-06 19:27:53.525447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.012 [2024-12-06 19:27:53.525505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.012 [2024-12-06 19:27:53.525570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:43.012 [2024-12-06 19:27:53.525574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.270 [2024-12-06 19:27:53.614512] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:43.270 [2024-12-06 19:27:53.614744] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:43.270 [2024-12-06 19:27:53.615065] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:43.270 [2024-12-06 19:27:53.615692] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:43.270 [2024-12-06 19:27:53.615933] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 [2024-12-06 19:27:53.666330] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 19:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 Malloc0 00:29:43.270 [2024-12-06 19:27:53.738491] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1248336 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1248336 /var/tmp/bdevperf.sock 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1248336 ']' 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:43.270 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:43.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:43.271 { 00:29:43.271 "params": { 00:29:43.271 "name": "Nvme$subsystem", 00:29:43.271 "trtype": "$TEST_TRANSPORT", 00:29:43.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:43.271 "adrfam": "ipv4", 00:29:43.271 "trsvcid": "$NVMF_PORT", 00:29:43.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:43.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:43.271 "hdgst": ${hdgst:-false}, 00:29:43.271 "ddgst": ${ddgst:-false} 00:29:43.271 }, 00:29:43.271 "method": "bdev_nvme_attach_controller" 00:29:43.271 } 00:29:43.271 EOF 00:29:43.271 )") 00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:43.271 19:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:43.271 "params": { 00:29:43.271 "name": "Nvme0", 00:29:43.271 "trtype": "tcp", 00:29:43.271 "traddr": "10.0.0.2", 00:29:43.271 "adrfam": "ipv4", 00:29:43.271 "trsvcid": "4420", 00:29:43.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:43.271 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:43.271 "hdgst": false, 00:29:43.271 "ddgst": false 00:29:43.271 }, 00:29:43.271 "method": "bdev_nvme_attach_controller" 00:29:43.271 }' 00:29:43.271 [2024-12-06 19:27:53.822060] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:29:43.271 [2024-12-06 19:27:53.822133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248336 ] 00:29:43.529 [2024-12-06 19:27:53.892218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.529 [2024-12-06 19:27:53.951415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.787 Running I/O for 10 seconds... 
00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:43.787 19:27:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.787 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.045 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.045 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:29:44.045 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:29:44.045 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=490 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 490 -ge 100 ']' 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.305 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.305 [2024-12-06 19:27:54.690315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b5c80 is same with the state(6) to be set 00:29:44.306 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.306 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host
nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:44.306 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.306 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.306 [2024-12-06 19:27:54.700044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.306 [2024-12-06 19:27:54.700086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.700115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.306 [2024-12-06 19:27:54.700133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.700147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.306 [2024-12-06 19:27:54.700162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.700176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.306 [2024-12-06 19:27:54.700190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.700203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c27660 is same with the state(6) to be set 00:29:44.306 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:29:44.306 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:44.306 [2024-12-06 19:27:54.714389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c27660 (9): Bad file descriptor 00:29:44.306 [2024-12-06 19:27:54.714494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.306 [2024-12-06 19:27:54.714923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.306 [2024-12-06 19:27:54.714938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.714952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.714976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.714989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715004] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715162] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 
[2024-12-06 19:27:54.715500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.307 [2024-12-06 19:27:54.715801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.307 [2024-12-06 19:27:54.715816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.715830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.715844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.715859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.715873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.715888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.715902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.715917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.715930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.715945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.715971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.715987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 
19:27:54.716174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.716396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.308 [2024-12-06 19:27:54.716410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.308 [2024-12-06 19:27:54.717586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:44.308 task offset: 73344 on job bdev=Nvme0n1 fails 00:29:44.308 00:29:44.308 Latency(us) 00:29:44.308 [2024-12-06T18:27:54.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.308 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.308 Job: Nvme0n1 ended in about 0.42 seconds with error 00:29:44.308 Verification LBA range: start 0x0 length 0x400 00:29:44.308 Nvme0n1 : 0.42 1351.61 84.48 150.96 0.00 41348.86 2427.26 35729.26 00:29:44.308 [2024-12-06T18:27:54.885Z] =================================================================================================================== 00:29:44.308 [2024-12-06T18:27:54.885Z] Total : 1351.61 84.48 150.96 0.00 41348.86 2427.26 35729.26 00:29:44.308 [2024-12-06 19:27:54.719461] app.c:1064:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:29:44.308 [2024-12-06 19:27:54.811816] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1248336 00:29:45.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1248336) - No such process 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:45.239 { 00:29:45.239 "params": { 00:29:45.239 "name": "Nvme$subsystem", 00:29:45.239 "trtype": "$TEST_TRANSPORT", 00:29:45.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.239 "adrfam": "ipv4", 00:29:45.239 "trsvcid": "$NVMF_PORT", 
00:29:45.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:45.239 "hdgst": ${hdgst:-false}, 00:29:45.239 "ddgst": ${ddgst:-false} 00:29:45.239 }, 00:29:45.239 "method": "bdev_nvme_attach_controller" 00:29:45.239 } 00:29:45.239 EOF 00:29:45.239 )") 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:45.239 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:45.239 "params": { 00:29:45.239 "name": "Nvme0", 00:29:45.239 "trtype": "tcp", 00:29:45.239 "traddr": "10.0.0.2", 00:29:45.239 "adrfam": "ipv4", 00:29:45.239 "trsvcid": "4420", 00:29:45.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:45.239 "hdgst": false, 00:29:45.239 "ddgst": false 00:29:45.239 }, 00:29:45.239 "method": "bdev_nvme_attach_controller" 00:29:45.239 }' 00:29:45.239 [2024-12-06 19:27:55.754404] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:29:45.239 [2024-12-06 19:27:55.754475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248606 ] 00:29:45.497 [2024-12-06 19:27:55.823235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.497 [2024-12-06 19:27:55.882394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.497 Running I/O for 1 seconds... 
00:29:46.871 1536.00 IOPS, 96.00 MiB/s 00:29:46.871 Latency(us) 00:29:46.871 [2024-12-06T18:27:57.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.871 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:46.871 Verification LBA range: start 0x0 length 0x400 00:29:46.871 Nvme0n1 : 1.01 1581.32 98.83 0.00 0.00 39820.16 6359.42 36117.62 00:29:46.871 [2024-12-06T18:27:57.448Z] =================================================================================================================== 00:29:46.871 [2024-12-06T18:27:57.448Z] Total : 1581.32 98.83 0.00 0.00 39820.16 6359.42 36117.62 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:46.871 19:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:46.871 rmmod nvme_tcp 00:29:46.871 rmmod nvme_fabrics 00:29:46.871 rmmod nvme_keyring 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1248279 ']' 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1248279 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1248279 ']' 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1248279 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1248279 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.871 19:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1248279' 00:29:46.871 killing process with pid 1248279 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1248279 00:29:46.871 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1248279 00:29:47.130 [2024-12-06 19:27:57.619750] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.130 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.664 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:49.665 00:29:49.665 real 0m8.835s 00:29:49.665 user 0m17.333s 00:29:49.665 sys 0m3.761s 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:49.665 ************************************ 00:29:49.665 END TEST nvmf_host_management 00:29:49.665 ************************************ 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:49.665 ************************************ 00:29:49.665 START TEST nvmf_lvol 00:29:49.665 ************************************ 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:49.665 * Looking for test storage... 
00:29:49.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:49.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.665 --rc genhtml_branch_coverage=1 00:29:49.665 --rc genhtml_function_coverage=1 00:29:49.665 --rc genhtml_legend=1 00:29:49.665 --rc geninfo_all_blocks=1 00:29:49.665 --rc geninfo_unexecuted_blocks=1 00:29:49.665 00:29:49.665 ' 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:49.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.665 --rc genhtml_branch_coverage=1 00:29:49.665 --rc genhtml_function_coverage=1 00:29:49.665 --rc genhtml_legend=1 00:29:49.665 --rc geninfo_all_blocks=1 00:29:49.665 --rc geninfo_unexecuted_blocks=1 00:29:49.665 00:29:49.665 ' 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:49.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.665 --rc genhtml_branch_coverage=1 00:29:49.665 --rc genhtml_function_coverage=1 00:29:49.665 --rc genhtml_legend=1 00:29:49.665 --rc geninfo_all_blocks=1 00:29:49.665 --rc geninfo_unexecuted_blocks=1 00:29:49.665 00:29:49.665 ' 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:49.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.665 --rc genhtml_branch_coverage=1 00:29:49.665 --rc genhtml_function_coverage=1 00:29:49.665 --rc genhtml_legend=1 00:29:49.665 --rc geninfo_all_blocks=1 00:29:49.665 --rc geninfo_unexecuted_blocks=1 00:29:49.665 00:29:49.665 ' 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.665 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:49.666 
19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.666 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:51.569 19:28:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:51.569 19:28:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:51.569 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:51.569 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.569 19:28:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.569 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.570 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.570 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.570 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:51.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:51.570 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.570 19:28:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.570 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.570 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.570 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.570 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.570 19:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:51.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:29:51.570 00:29:51.570 --- 10.0.0.2 ping statistics --- 00:29:51.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.570 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:29:51.570 00:29:51.570 --- 10.0.0.1 ping statistics --- 00:29:51.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.570 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.570 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1250686 
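The firewall step above (common.sh@287/@790) opens TCP port 4420 through an `ipts` wrapper that tags each rule with an `SPDK_NVMF:` comment so the later `iptr` cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip exactly the rules the test added. A hedged sketch of that tagging idea — not the real common.sh source; it only prints the argument vector instead of invoking iptables:

```shell
# Sketch of the ipts wrapper seen in the trace: append a matchable
# "SPDK_NVMF:<original args>" comment to every rule it installs.
ipts() {
  # A real run would exec: iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  # Here we only echo the command line the wrapper would build.
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Because every rule carries the same marker, teardown never has to remember individual rule specs — it just filters the marker back out of `iptables-save` output.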
00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1250686 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1250686 ']' 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.829 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:51.829 [2024-12-06 19:28:02.213490] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:51.830 [2024-12-06 19:28:02.214632] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
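The `waitforlisten 1250686` call above blocks until the freshly launched nvmf_tgt is reachable on `/var/tmp/spdk.sock` (with `max_retries=100`). A minimal sketch of that polling pattern, assuming a bounded retry loop on the RPC unix socket — the real autotest_common.sh helper additionally checks that the pid is still alive and that the RPC endpoint answers, which is omitted here:

```shell
# Hedged sketch of the waitforlisten idea: poll for the app's RPC unix
# socket, giving up after a bounded number of 100 ms retries.
waitforsock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -S $sock ]] && return 0   # socket node exists -> app is listening
    sleep 0.1
  done
  return 1                       # retries exhausted
}

waitforsock /var/tmp/spdk.sock 1 || echo 'timed out waiting for RPC socket'
```

Polling with a retry budget (rather than a bare `sleep`) is what lets the suite fail fast with a diagnostic when the target crashes during startup instead of hanging the whole run.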
00:29:51.830 [2024-12-06 19:28:02.214730] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.830 [2024-12-06 19:28:02.287325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:51.830 [2024-12-06 19:28:02.347729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.830 [2024-12-06 19:28:02.347785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.830 [2024-12-06 19:28:02.347799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.830 [2024-12-06 19:28:02.347810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.830 [2024-12-06 19:28:02.347820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.830 [2024-12-06 19:28:02.349266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.830 [2024-12-06 19:28:02.349327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.830 [2024-12-06 19:28:02.349330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.088 [2024-12-06 19:28:02.439952] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:52.088 [2024-12-06 19:28:02.440203] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:52.088 [2024-12-06 19:28:02.440206] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:29:52.088 [2024-12-06 19:28:02.440452] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:52.088 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.088 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:52.088 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.088 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.088 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:52.088 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.088 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:52.346 [2024-12-06 19:28:02.741987] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.346 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:52.605 19:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:52.605 19:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:52.863 19:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:52.863 19:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:53.121 19:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:53.379 19:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=eea9da2b-3dd6-4d0d-b6cf-ea80c2fe3d06 00:29:53.379 19:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eea9da2b-3dd6-4d0d-b6cf-ea80c2fe3d06 lvol 20 00:29:53.637 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c984558f-7b38-4bd6-a356-a7469f59cebc 00:29:53.637 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:54.200 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c984558f-7b38-4bd6-a356-a7469f59cebc 00:29:54.200 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:54.457 [2024-12-06 19:28:05.018232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.723 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:54.981 
19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1251112 00:29:54.981 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:54.981 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:55.913 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c984558f-7b38-4bd6-a356-a7469f59cebc MY_SNAPSHOT 00:29:56.170 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d511637f-8465-4f5f-8dcf-d90475f580fe 00:29:56.170 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c984558f-7b38-4bd6-a356-a7469f59cebc 30 00:29:56.428 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d511637f-8465-4f5f-8dcf-d90475f580fe MY_CLONE 00:29:56.685 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d6c4e840-7c38-4658-b02c-cc14b52bb6d4 00:29:56.685 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d6c4e840-7c38-4658-b02c-cc14b52bb6d4 00:29:57.253 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1251112 00:30:05.365 Initializing NVMe Controllers 00:30:05.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:05.365 
Controller IO queue size 128, less than required. 00:30:05.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:05.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:05.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:05.365 Initialization complete. Launching workers. 00:30:05.365 ======================================================== 00:30:05.365 Latency(us) 00:30:05.365 Device Information : IOPS MiB/s Average min max 00:30:05.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10541.70 41.18 12142.51 4673.62 88623.50 00:30:05.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10438.50 40.78 12268.87 5012.39 73396.28 00:30:05.365 ======================================================== 00:30:05.365 Total : 20980.20 81.95 12205.38 4673.62 88623.50 00:30:05.365 00:30:05.365 19:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:05.625 19:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c984558f-7b38-4bd6-a356-a7469f59cebc 00:30:05.881 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eea9da2b-3dd6-4d0d-b6cf-ea80c2fe3d06 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.138 rmmod nvme_tcp 00:30:06.138 rmmod nvme_fabrics 00:30:06.138 rmmod nvme_keyring 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1250686 ']' 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1250686 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1250686 ']' 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1250686 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1250686 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1250686' 00:30:06.138 killing process with pid 1250686 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1250686 00:30:06.138 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1250686 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.397 19:28:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.397 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.930 19:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.930 00:30:08.930 real 0m19.230s 00:30:08.930 user 0m56.734s 00:30:08.930 sys 0m7.563s 00:30:08.930 19:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.930 19:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:08.930 ************************************ 00:30:08.930 END TEST nvmf_lvol 00:30:08.930 ************************************ 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:08.930 ************************************ 00:30:08.930 START TEST nvmf_lvs_grow 00:30:08.930 ************************************ 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:08.930 * Looking for test storage... 
00:30:08.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.930 19:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:08.930 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.931 19:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:08.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.931 --rc genhtml_branch_coverage=1 00:30:08.931 --rc genhtml_function_coverage=1 00:30:08.931 --rc genhtml_legend=1 00:30:08.931 --rc geninfo_all_blocks=1 00:30:08.931 --rc geninfo_unexecuted_blocks=1 00:30:08.931 00:30:08.931 ' 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:08.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.931 --rc genhtml_branch_coverage=1 00:30:08.931 --rc genhtml_function_coverage=1 00:30:08.931 --rc genhtml_legend=1 00:30:08.931 --rc geninfo_all_blocks=1 00:30:08.931 --rc geninfo_unexecuted_blocks=1 00:30:08.931 00:30:08.931 ' 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:08.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.931 --rc genhtml_branch_coverage=1 00:30:08.931 --rc genhtml_function_coverage=1 00:30:08.931 --rc genhtml_legend=1 00:30:08.931 --rc geninfo_all_blocks=1 00:30:08.931 --rc geninfo_unexecuted_blocks=1 00:30:08.931 00:30:08.931 ' 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:08.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.931 --rc genhtml_branch_coverage=1 00:30:08.931 --rc genhtml_function_coverage=1 00:30:08.931 --rc genhtml_legend=1 00:30:08.931 --rc geninfo_all_blocks=1 00:30:08.931 --rc 
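The trace above shows scripts/common.sh comparing the installed lcov version (1.15) against 2 field by field before deciding which `--rc` options to export. A minimal re-implementation of that dotted-version "less than" check is sketched below; `ver_lt` is an illustrative name, and only plain numeric fields are assumed, mirroring the `cmp_versions`/`lt` logic in the trace:

```shell
# Hedged sketch of the field-by-field dotted-version compare traced above.
# ver_lt returns 0 (true) when $1 < $2, numerically per dot-separated field.
ver_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "lcov < 2: enable legacy --rc options"
```

Note the compare is numeric, not lexicographic: `1.9` sorts below `1.15`, which is the behavior the version-probe in the trace relies on.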
geninfo_unexecuted_blocks=1 00:30:08.931 00:30:08.931 ' 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.931 19:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.931 19:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.931 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.932 19:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.932 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:10.834 
19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.834 19:28:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:10.834 19:28:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:10.834 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:10.834 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.834 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:10.835 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.835 19:28:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:10.835 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:10.835 
19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:10.835 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:30:11.095 00:30:11.095 --- 10.0.0.2 ping statistics --- 00:30:11.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.095 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:11.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:30:11.095 00:30:11.095 --- 10.0.0.1 ping statistics --- 00:30:11.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.095 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:11.095 19:28:21 
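The nvmf/common.sh steps traced above (lines @265–@291) move one physical port (cvl_0_0) into a fresh network namespace, address both ends on 10.0.0.0/24, open the 4420 firewall port, and ping-verify both directions before the target starts. A hedged, generic reproduction of that topology using a veth pair instead of the physical cvl_0_* NICs is sketched below; it requires CAP_NET_ADMIN, and the names `demo_ns`/`veth0`/`veth1` are illustrative, not from the trace:

```shell
# Sketch of the namespace-based TCP test topology set up in the trace.
# Uses a veth pair rather than physical ports; run as root.
ip netns add demo_ns                       # target-side namespace
ip link add veth0 type veth peer name veth1
ip link set veth0 netns demo_ns            # "target" end goes into the namespace
ip addr add 10.0.0.1/24 dev veth1          # initiator side stays in the host
ip netns exec demo_ns ip addr add 10.0.0.2/24 dev veth0
ip link set veth1 up
ip netns exec demo_ns ip link set veth0 up
ip netns exec demo_ns ip link set lo up
ping -c 1 10.0.0.2                         # host -> namespace, as in the trace
ip netns exec demo_ns ping -c 1 10.0.0.1   # namespace -> host
```

The point of the namespace split, visible in the trace when `NVMF_APP` is prefixed with `ip netns exec cvl_0_0_ns_spdk`, is that target and initiator get independent network stacks on one machine, so the TCP path is exercised end to end rather than short-circuited over loopback.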
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1254373 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1254373 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1254373 ']' 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.095 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:11.095 [2024-12-06 19:28:21.502831] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:11.095 [2024-12-06 19:28:21.503899] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:30:11.095 [2024-12-06 19:28:21.503966] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.095 [2024-12-06 19:28:21.575419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.095 [2024-12-06 19:28:21.636053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.095 [2024-12-06 19:28:21.636108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.095 [2024-12-06 19:28:21.636137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.095 [2024-12-06 19:28:21.636154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.095 [2024-12-06 19:28:21.636164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.095 [2024-12-06 19:28:21.636812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.354 [2024-12-06 19:28:21.725380] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:11.354 [2024-12-06 19:28:21.725679] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:11.354 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.354 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:11.354 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:11.354 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:11.354 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:11.354 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.354 19:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:11.613 [2024-12-06 19:28:22.037389] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:11.613 ************************************ 00:30:11.613 START TEST lvs_grow_clean 00:30:11.613 ************************************ 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:11.613 19:28:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:11.613 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:11.871 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:11.871 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:12.130 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:12.130 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:12.130 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:12.389 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:12.389 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:12.389 19:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a2491928-22be-4dd8-a46d-48f36e3eb328 lvol 150 00:30:12.647 19:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=90f875e2-ac50-4d38-9030-505550f27d04 00:30:12.647 19:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:12.647 19:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:12.906 [2024-12-06 19:28:23.477295] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:12.906 [2024-12-06 19:28:23.477399] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:12.906 true 00:30:13.165 19:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:13.165 19:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:13.423 19:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:13.423 19:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:13.681 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 90f875e2-ac50-4d38-9030-505550f27d04 00:30:13.940 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:14.197 [2024-12-06 19:28:24.593598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.197 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1254804 00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1254804 /var/tmp/bdevperf.sock 00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1254804 ']' 00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:14.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.456 19:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:14.456 [2024-12-06 19:28:24.925866] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:30:14.456 [2024-12-06 19:28:24.925948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254804 ] 00:30:14.456 [2024-12-06 19:28:24.994460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.713 [2024-12-06 19:28:25.056213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.713 19:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.713 19:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:14.713 19:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:15.278 Nvme0n1 00:30:15.278 19:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:15.536 [ 00:30:15.536 { 00:30:15.536 "name": "Nvme0n1", 00:30:15.536 "aliases": [ 00:30:15.536 "90f875e2-ac50-4d38-9030-505550f27d04" 00:30:15.536 ], 00:30:15.536 "product_name": "NVMe disk", 00:30:15.536 
"block_size": 4096, 00:30:15.536 "num_blocks": 38912, 00:30:15.536 "uuid": "90f875e2-ac50-4d38-9030-505550f27d04", 00:30:15.536 "numa_id": 0, 00:30:15.536 "assigned_rate_limits": { 00:30:15.536 "rw_ios_per_sec": 0, 00:30:15.536 "rw_mbytes_per_sec": 0, 00:30:15.536 "r_mbytes_per_sec": 0, 00:30:15.536 "w_mbytes_per_sec": 0 00:30:15.536 }, 00:30:15.536 "claimed": false, 00:30:15.536 "zoned": false, 00:30:15.536 "supported_io_types": { 00:30:15.536 "read": true, 00:30:15.536 "write": true, 00:30:15.536 "unmap": true, 00:30:15.536 "flush": true, 00:30:15.536 "reset": true, 00:30:15.536 "nvme_admin": true, 00:30:15.536 "nvme_io": true, 00:30:15.536 "nvme_io_md": false, 00:30:15.536 "write_zeroes": true, 00:30:15.536 "zcopy": false, 00:30:15.536 "get_zone_info": false, 00:30:15.536 "zone_management": false, 00:30:15.536 "zone_append": false, 00:30:15.536 "compare": true, 00:30:15.536 "compare_and_write": true, 00:30:15.536 "abort": true, 00:30:15.536 "seek_hole": false, 00:30:15.536 "seek_data": false, 00:30:15.536 "copy": true, 00:30:15.536 "nvme_iov_md": false 00:30:15.536 }, 00:30:15.536 "memory_domains": [ 00:30:15.536 { 00:30:15.536 "dma_device_id": "system", 00:30:15.536 "dma_device_type": 1 00:30:15.536 } 00:30:15.536 ], 00:30:15.536 "driver_specific": { 00:30:15.536 "nvme": [ 00:30:15.536 { 00:30:15.536 "trid": { 00:30:15.536 "trtype": "TCP", 00:30:15.536 "adrfam": "IPv4", 00:30:15.536 "traddr": "10.0.0.2", 00:30:15.536 "trsvcid": "4420", 00:30:15.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:15.536 }, 00:30:15.536 "ctrlr_data": { 00:30:15.536 "cntlid": 1, 00:30:15.536 "vendor_id": "0x8086", 00:30:15.536 "model_number": "SPDK bdev Controller", 00:30:15.536 "serial_number": "SPDK0", 00:30:15.536 "firmware_revision": "25.01", 00:30:15.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.536 "oacs": { 00:30:15.537 "security": 0, 00:30:15.537 "format": 0, 00:30:15.537 "firmware": 0, 00:30:15.537 "ns_manage": 0 00:30:15.537 }, 00:30:15.537 "multi_ctrlr": true, 
00:30:15.537 "ana_reporting": false 00:30:15.537 }, 00:30:15.537 "vs": { 00:30:15.537 "nvme_version": "1.3" 00:30:15.537 }, 00:30:15.537 "ns_data": { 00:30:15.537 "id": 1, 00:30:15.537 "can_share": true 00:30:15.537 } 00:30:15.537 } 00:30:15.537 ], 00:30:15.537 "mp_policy": "active_passive" 00:30:15.537 } 00:30:15.537 } 00:30:15.537 ] 00:30:15.537 19:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1254945 00:30:15.537 19:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:15.537 19:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:15.794 Running I/O for 10 seconds... 00:30:16.728 Latency(us) 00:30:16.728 [2024-12-06T18:28:27.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:16.728 Nvme0n1 : 1.00 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:30:16.728 [2024-12-06T18:28:27.305Z] =================================================================================================================== 00:30:16.728 [2024-12-06T18:28:27.305Z] Total : 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:30:16.728 00:30:17.773 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:17.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:17.773 Nvme0n1 : 2.00 15303.50 59.78 0.00 0.00 0.00 0.00 0.00 00:30:17.773 [2024-12-06T18:28:28.350Z] 
=================================================================================================================== 00:30:17.773 [2024-12-06T18:28:28.350Z] Total : 15303.50 59.78 0.00 0.00 0.00 0.00 0.00 00:30:17.773 00:30:18.054 true 00:30:18.054 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:18.054 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:18.312 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:18.312 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:18.312 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1254945 00:30:18.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.570 Nvme0n1 : 3.00 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:30:18.570 [2024-12-06T18:28:29.147Z] =================================================================================================================== 00:30:18.570 [2024-12-06T18:28:29.147Z] Total : 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:30:18.570 00:30:19.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.945 Nvme0n1 : 4.00 15478.25 60.46 0.00 0.00 0.00 0.00 0.00 00:30:19.945 [2024-12-06T18:28:30.522Z] =================================================================================================================== 00:30:19.945 [2024-12-06T18:28:30.522Z] Total : 15478.25 60.46 0.00 0.00 0.00 0.00 0.00 00:30:19.945 00:30:20.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:20.879 Nvme0n1 : 5.00 15544.80 60.72 0.00 0.00 0.00 0.00 0.00 00:30:20.879 [2024-12-06T18:28:31.456Z] =================================================================================================================== 00:30:20.879 [2024-12-06T18:28:31.456Z] Total : 15544.80 60.72 0.00 0.00 0.00 0.00 0.00 00:30:20.879 00:30:21.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.814 Nvme0n1 : 6.00 15642.17 61.10 0.00 0.00 0.00 0.00 0.00 00:30:21.814 [2024-12-06T18:28:32.391Z] =================================================================================================================== 00:30:21.814 [2024-12-06T18:28:32.391Z] Total : 15642.17 61.10 0.00 0.00 0.00 0.00 0.00 00:30:21.814 00:30:22.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.748 Nvme0n1 : 7.00 15720.86 61.41 0.00 0.00 0.00 0.00 0.00 00:30:22.748 [2024-12-06T18:28:33.325Z] =================================================================================================================== 00:30:22.748 [2024-12-06T18:28:33.325Z] Total : 15720.86 61.41 0.00 0.00 0.00 0.00 0.00 00:30:22.748 00:30:23.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:23.682 Nvme0n1 : 8.00 15795.62 61.70 0.00 0.00 0.00 0.00 0.00 00:30:23.682 [2024-12-06T18:28:34.259Z] =================================================================================================================== 00:30:23.682 [2024-12-06T18:28:34.259Z] Total : 15795.62 61.70 0.00 0.00 0.00 0.00 0.00 00:30:23.682 00:30:24.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:24.627 Nvme0n1 : 9.00 15846.78 61.90 0.00 0.00 0.00 0.00 0.00 00:30:24.627 [2024-12-06T18:28:35.204Z] =================================================================================================================== 00:30:24.627 [2024-12-06T18:28:35.204Z] Total : 15846.78 61.90 0.00 0.00 0.00 0.00 0.00 00:30:24.627 
00:30:26.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.007 Nvme0n1 : 10.00 15868.70 61.99 0.00 0.00 0.00 0.00 0.00 00:30:26.007 [2024-12-06T18:28:36.584Z] =================================================================================================================== 00:30:26.007 [2024-12-06T18:28:36.584Z] Total : 15868.70 61.99 0.00 0.00 0.00 0.00 0.00 00:30:26.007 00:30:26.007 00:30:26.007 Latency(us) 00:30:26.007 [2024-12-06T18:28:36.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.007 Nvme0n1 : 10.00 15868.38 61.99 0.00 0.00 8061.38 5971.06 18932.62 00:30:26.007 [2024-12-06T18:28:36.584Z] =================================================================================================================== 00:30:26.007 [2024-12-06T18:28:36.584Z] Total : 15868.38 61.99 0.00 0.00 8061.38 5971.06 18932.62 00:30:26.007 { 00:30:26.007 "results": [ 00:30:26.007 { 00:30:26.007 "job": "Nvme0n1", 00:30:26.007 "core_mask": "0x2", 00:30:26.007 "workload": "randwrite", 00:30:26.007 "status": "finished", 00:30:26.007 "queue_depth": 128, 00:30:26.007 "io_size": 4096, 00:30:26.007 "runtime": 10.004237, 00:30:26.007 "iops": 15868.37656884778, 00:30:26.007 "mibps": 61.98584597206164, 00:30:26.007 "io_failed": 0, 00:30:26.007 "io_timeout": 0, 00:30:26.007 "avg_latency_us": 8061.381019285501, 00:30:26.007 "min_latency_us": 5971.057777777778, 00:30:26.007 "max_latency_us": 18932.62222222222 00:30:26.007 } 00:30:26.007 ], 00:30:26.007 "core_count": 1 00:30:26.007 } 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1254804 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1254804 ']' 00:30:26.007 19:28:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1254804 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254804 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254804' 00:30:26.007 killing process with pid 1254804 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1254804 00:30:26.007 Received shutdown signal, test time was about 10.000000 seconds 00:30:26.007 00:30:26.007 Latency(us) 00:30:26.007 [2024-12-06T18:28:36.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.007 [2024-12-06T18:28:36.584Z] =================================================================================================================== 00:30:26.007 [2024-12-06T18:28:36.584Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1254804 00:30:26.007 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:26.266 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:26.524 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:26.524 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:26.783 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:26.783 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:26.783 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:27.040 [2024-12-06 19:28:37.573390] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:27.298 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:27.299 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:27.557 request: 00:30:27.557 { 00:30:27.557 "uuid": "a2491928-22be-4dd8-a46d-48f36e3eb328", 00:30:27.557 "method": 
"bdev_lvol_get_lvstores", 00:30:27.557 "req_id": 1 00:30:27.557 } 00:30:27.557 Got JSON-RPC error response 00:30:27.557 response: 00:30:27.557 { 00:30:27.557 "code": -19, 00:30:27.557 "message": "No such device" 00:30:27.557 } 00:30:27.557 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:27.557 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:27.557 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:27.557 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:27.557 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:27.815 aio_bdev 00:30:27.815 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 90f875e2-ac50-4d38-9030-505550f27d04 00:30:27.815 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=90f875e2-ac50-4d38-9030-505550f27d04 00:30:27.815 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:27.815 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:27.815 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:27.815 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:27.815 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:28.072 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 90f875e2-ac50-4d38-9030-505550f27d04 -t 2000 00:30:28.330 [ 00:30:28.330 { 00:30:28.330 "name": "90f875e2-ac50-4d38-9030-505550f27d04", 00:30:28.330 "aliases": [ 00:30:28.330 "lvs/lvol" 00:30:28.330 ], 00:30:28.330 "product_name": "Logical Volume", 00:30:28.330 "block_size": 4096, 00:30:28.330 "num_blocks": 38912, 00:30:28.330 "uuid": "90f875e2-ac50-4d38-9030-505550f27d04", 00:30:28.330 "assigned_rate_limits": { 00:30:28.330 "rw_ios_per_sec": 0, 00:30:28.330 "rw_mbytes_per_sec": 0, 00:30:28.330 "r_mbytes_per_sec": 0, 00:30:28.330 "w_mbytes_per_sec": 0 00:30:28.330 }, 00:30:28.330 "claimed": false, 00:30:28.330 "zoned": false, 00:30:28.330 "supported_io_types": { 00:30:28.330 "read": true, 00:30:28.330 "write": true, 00:30:28.330 "unmap": true, 00:30:28.330 "flush": false, 00:30:28.330 "reset": true, 00:30:28.330 "nvme_admin": false, 00:30:28.330 "nvme_io": false, 00:30:28.330 "nvme_io_md": false, 00:30:28.330 "write_zeroes": true, 00:30:28.330 "zcopy": false, 00:30:28.330 "get_zone_info": false, 00:30:28.330 "zone_management": false, 00:30:28.330 "zone_append": false, 00:30:28.330 "compare": false, 00:30:28.330 "compare_and_write": false, 00:30:28.330 "abort": false, 00:30:28.330 "seek_hole": true, 00:30:28.330 "seek_data": true, 00:30:28.330 "copy": false, 00:30:28.330 "nvme_iov_md": false 00:30:28.330 }, 00:30:28.330 "driver_specific": { 00:30:28.330 "lvol": { 00:30:28.330 "lvol_store_uuid": "a2491928-22be-4dd8-a46d-48f36e3eb328", 00:30:28.330 "base_bdev": "aio_bdev", 00:30:28.330 
"thin_provision": false, 00:30:28.330 "num_allocated_clusters": 38, 00:30:28.330 "snapshot": false, 00:30:28.330 "clone": false, 00:30:28.330 "esnap_clone": false 00:30:28.330 } 00:30:28.330 } 00:30:28.330 } 00:30:28.330 ] 00:30:28.330 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:28.330 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:28.330 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:28.588 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:28.588 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2491928-22be-4dd8-a46d-48f36e3eb328 00:30:28.588 19:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:28.846 19:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:28.846 19:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 90f875e2-ac50-4d38-9030-505550f27d04 00:30:29.103 19:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a2491928-22be-4dd8-a46d-48f36e3eb328 
00:30:29.360 19:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:29.617 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:29.617 00:30:29.617 real 0m18.075s 00:30:29.617 user 0m17.290s 00:30:29.617 sys 0m2.072s 00:30:29.617 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.617 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 ************************************ 00:30:29.617 END TEST lvs_grow_clean 00:30:29.617 ************************************ 00:30:29.617 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:29.617 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:29.617 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.617 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:29.875 ************************************ 00:30:29.875 START TEST lvs_grow_dirty 00:30:29.875 ************************************ 00:30:29.875 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:29.875 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:29.875 19:28:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:29.875 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:29.875 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:29.875 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:29.875 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:29.875 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:29.875 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:29.875 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:30.132 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:30.132 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:30.389 19:28:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:30.389 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:30.389 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:30.647 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:30.647 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:30.647 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b lvol 150 00:30:30.905 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1242cf92-d9fc-4ad9-9341-24379a3ea532 00:30:30.905 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:30.905 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:31.163 [2024-12-06 19:28:41.589321] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:31.163 [2024-12-06 
19:28:41.589429] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:31.163 true 00:30:31.163 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:31.163 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:31.421 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:31.421 19:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:31.679 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1242cf92-d9fc-4ad9-9341-24379a3ea532 00:30:31.938 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:32.196 [2024-12-06 19:28:42.705714] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.196 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1256971 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1256971 /var/tmp/bdevperf.sock 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1256971 ']' 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:32.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.454 19:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:32.713 [2024-12-06 19:28:43.037847] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:30:32.713 [2024-12-06 19:28:43.037935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256971 ] 00:30:32.713 [2024-12-06 19:28:43.105697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.713 [2024-12-06 19:28:43.168216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.713 19:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.713 19:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:32.713 19:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:33.280 Nvme0n1 00:30:33.280 19:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:33.538 [ 00:30:33.538 { 00:30:33.538 "name": "Nvme0n1", 00:30:33.538 "aliases": [ 00:30:33.538 "1242cf92-d9fc-4ad9-9341-24379a3ea532" 00:30:33.538 ], 00:30:33.538 "product_name": "NVMe disk", 00:30:33.538 "block_size": 4096, 00:30:33.538 "num_blocks": 38912, 00:30:33.538 "uuid": "1242cf92-d9fc-4ad9-9341-24379a3ea532", 00:30:33.538 "numa_id": 0, 00:30:33.538 "assigned_rate_limits": { 00:30:33.538 "rw_ios_per_sec": 0, 00:30:33.538 "rw_mbytes_per_sec": 0, 00:30:33.538 "r_mbytes_per_sec": 0, 00:30:33.538 "w_mbytes_per_sec": 0 00:30:33.538 }, 00:30:33.538 "claimed": false, 00:30:33.538 "zoned": false, 
00:30:33.538 "supported_io_types": { 00:30:33.538 "read": true, 00:30:33.538 "write": true, 00:30:33.538 "unmap": true, 00:30:33.538 "flush": true, 00:30:33.538 "reset": true, 00:30:33.538 "nvme_admin": true, 00:30:33.538 "nvme_io": true, 00:30:33.538 "nvme_io_md": false, 00:30:33.538 "write_zeroes": true, 00:30:33.538 "zcopy": false, 00:30:33.538 "get_zone_info": false, 00:30:33.538 "zone_management": false, 00:30:33.538 "zone_append": false, 00:30:33.538 "compare": true, 00:30:33.538 "compare_and_write": true, 00:30:33.538 "abort": true, 00:30:33.538 "seek_hole": false, 00:30:33.538 "seek_data": false, 00:30:33.538 "copy": true, 00:30:33.538 "nvme_iov_md": false 00:30:33.538 }, 00:30:33.538 "memory_domains": [ 00:30:33.538 { 00:30:33.538 "dma_device_id": "system", 00:30:33.538 "dma_device_type": 1 00:30:33.538 } 00:30:33.538 ], 00:30:33.538 "driver_specific": { 00:30:33.538 "nvme": [ 00:30:33.538 { 00:30:33.538 "trid": { 00:30:33.538 "trtype": "TCP", 00:30:33.538 "adrfam": "IPv4", 00:30:33.538 "traddr": "10.0.0.2", 00:30:33.538 "trsvcid": "4420", 00:30:33.538 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:33.538 }, 00:30:33.538 "ctrlr_data": { 00:30:33.538 "cntlid": 1, 00:30:33.538 "vendor_id": "0x8086", 00:30:33.538 "model_number": "SPDK bdev Controller", 00:30:33.538 "serial_number": "SPDK0", 00:30:33.538 "firmware_revision": "25.01", 00:30:33.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:33.538 "oacs": { 00:30:33.538 "security": 0, 00:30:33.538 "format": 0, 00:30:33.538 "firmware": 0, 00:30:33.538 "ns_manage": 0 00:30:33.538 }, 00:30:33.538 "multi_ctrlr": true, 00:30:33.538 "ana_reporting": false 00:30:33.538 }, 00:30:33.538 "vs": { 00:30:33.538 "nvme_version": "1.3" 00:30:33.538 }, 00:30:33.538 "ns_data": { 00:30:33.538 "id": 1, 00:30:33.538 "can_share": true 00:30:33.538 } 00:30:33.538 } 00:30:33.538 ], 00:30:33.538 "mp_policy": "active_passive" 00:30:33.538 } 00:30:33.538 } 00:30:33.538 ] 00:30:33.538 19:28:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1257106 00:30:33.538 19:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:33.539 19:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:33.539 Running I/O for 10 seconds... 00:30:34.914 Latency(us) 00:30:34.914 [2024-12-06T18:28:45.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.914 Nvme0n1 : 1.00 13707.00 53.54 0.00 0.00 0.00 0.00 0.00 00:30:34.914 [2024-12-06T18:28:45.491Z] =================================================================================================================== 00:30:34.914 [2024-12-06T18:28:45.491Z] Total : 13707.00 53.54 0.00 0.00 0.00 0.00 0.00 00:30:34.914 00:30:35.477 19:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:35.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:35.734 Nvme0n1 : 2.00 13717.50 53.58 0.00 0.00 0.00 0.00 0.00 00:30:35.734 [2024-12-06T18:28:46.311Z] =================================================================================================================== 00:30:35.734 [2024-12-06T18:28:46.311Z] Total : 13717.50 53.58 0.00 0.00 0.00 0.00 0.00 00:30:35.734 00:30:35.734 true 00:30:35.734 19:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:35.734 19:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:36.298 19:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:36.298 19:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:36.298 19:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1257106 00:30:36.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.555 Nvme0n1 : 3.00 13785.00 53.85 0.00 0.00 0.00 0.00 0.00 00:30:36.555 [2024-12-06T18:28:47.132Z] =================================================================================================================== 00:30:36.555 [2024-12-06T18:28:47.132Z] Total : 13785.00 53.85 0.00 0.00 0.00 0.00 0.00 00:30:36.555 00:30:37.929 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:37.929 Nvme0n1 : 4.00 13830.75 54.03 0.00 0.00 0.00 0.00 0.00 00:30:37.929 [2024-12-06T18:28:48.506Z] =================================================================================================================== 00:30:37.929 [2024-12-06T18:28:48.506Z] Total : 13830.75 54.03 0.00 0.00 0.00 0.00 0.00 00:30:37.929 00:30:38.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.863 Nvme0n1 : 5.00 13871.00 54.18 0.00 0.00 0.00 0.00 0.00 00:30:38.863 [2024-12-06T18:28:49.440Z] =================================================================================================================== 00:30:38.863 [2024-12-06T18:28:49.440Z] Total : 13871.00 54.18 0.00 0.00 0.00 0.00 0.00 00:30:38.863 00:30:39.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:39.801 Nvme0n1 : 6.00 13884.50 54.24 0.00 0.00 0.00 0.00 0.00 00:30:39.801 [2024-12-06T18:28:50.378Z] =================================================================================================================== 00:30:39.801 [2024-12-06T18:28:50.378Z] Total : 13884.50 54.24 0.00 0.00 0.00 0.00 0.00 00:30:39.801 00:30:40.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:40.798 Nvme0n1 : 7.00 13889.57 54.26 0.00 0.00 0.00 0.00 0.00 00:30:40.798 [2024-12-06T18:28:51.375Z] =================================================================================================================== 00:30:40.798 [2024-12-06T18:28:51.375Z] Total : 13889.57 54.26 0.00 0.00 0.00 0.00 0.00 00:30:40.798 00:30:41.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:41.731 Nvme0n1 : 8.00 13907.38 54.33 0.00 0.00 0.00 0.00 0.00 00:30:41.731 [2024-12-06T18:28:52.308Z] =================================================================================================================== 00:30:41.731 [2024-12-06T18:28:52.308Z] Total : 13907.38 54.33 0.00 0.00 0.00 0.00 0.00 00:30:41.731 00:30:42.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:42.663 Nvme0n1 : 9.00 13887.44 54.25 0.00 0.00 0.00 0.00 0.00 00:30:42.663 [2024-12-06T18:28:53.240Z] =================================================================================================================== 00:30:42.663 [2024-12-06T18:28:53.240Z] Total : 13887.44 54.25 0.00 0.00 0.00 0.00 0.00 00:30:42.663 00:30:43.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.595 Nvme0n1 : 10.00 13906.70 54.32 0.00 0.00 0.00 0.00 0.00 00:30:43.595 [2024-12-06T18:28:54.172Z] =================================================================================================================== 00:30:43.595 [2024-12-06T18:28:54.172Z] Total : 13906.70 54.32 0.00 0.00 0.00 0.00 0.00 00:30:43.595 00:30:43.595 
00:30:43.595 Latency(us) 00:30:43.595 [2024-12-06T18:28:54.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.595 Nvme0n1 : 10.01 13907.75 54.33 0.00 0.00 9195.93 2378.71 12233.39 00:30:43.595 [2024-12-06T18:28:54.172Z] =================================================================================================================== 00:30:43.595 [2024-12-06T18:28:54.172Z] Total : 13907.75 54.33 0.00 0.00 9195.93 2378.71 12233.39 00:30:43.595 { 00:30:43.595 "results": [ 00:30:43.595 { 00:30:43.595 "job": "Nvme0n1", 00:30:43.595 "core_mask": "0x2", 00:30:43.595 "workload": "randwrite", 00:30:43.595 "status": "finished", 00:30:43.595 "queue_depth": 128, 00:30:43.595 "io_size": 4096, 00:30:43.595 "runtime": 10.008448, 00:30:43.595 "iops": 13907.750732181454, 00:30:43.595 "mibps": 54.327151297583804, 00:30:43.595 "io_failed": 0, 00:30:43.595 "io_timeout": 0, 00:30:43.595 "avg_latency_us": 9195.92847580466, 00:30:43.595 "min_latency_us": 2378.7140740740742, 00:30:43.595 "max_latency_us": 12233.386666666667 00:30:43.595 } 00:30:43.595 ], 00:30:43.595 "core_count": 1 00:30:43.595 } 00:30:43.595 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1256971 00:30:43.595 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1256971 ']' 00:30:43.596 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1256971 00:30:43.596 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:43.596 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.596 19:28:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1256971 00:30:43.853 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:43.853 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:43.853 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1256971' 00:30:43.853 killing process with pid 1256971 00:30:43.853 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1256971 00:30:43.853 Received shutdown signal, test time was about 10.000000 seconds 00:30:43.853 00:30:43.853 Latency(us) 00:30:43.853 [2024-12-06T18:28:54.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.853 [2024-12-06T18:28:54.430Z] =================================================================================================================== 00:30:43.853 [2024-12-06T18:28:54.430Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:43.853 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1256971 00:30:43.853 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:44.110 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:44.674 19:28:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:44.674 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:44.674 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:44.674 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:44.674 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1254373 00:30:44.674 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1254373 00:30:44.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1254373 Killed "${NVMF_APP[@]}" "$@" 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1258421 00:30:44.931 19:28:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1258421 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1258421 ']' 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.931 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:44.931 [2024-12-06 19:28:55.306585] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:44.931 [2024-12-06 19:28:55.307736] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:30:44.931 [2024-12-06 19:28:55.307797] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.931 [2024-12-06 19:28:55.380651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.931 [2024-12-06 19:28:55.438903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.931 [2024-12-06 19:28:55.439011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.931 [2024-12-06 19:28:55.439025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.931 [2024-12-06 19:28:55.439036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.931 [2024-12-06 19:28:55.439045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.931 [2024-12-06 19:28:55.439624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.188 [2024-12-06 19:28:55.537809] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.188 [2024-12-06 19:28:55.538122] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:45.188 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.188 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:45.188 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.188 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.188 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:45.188 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.188 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:45.446 [2024-12-06 19:28:55.842516] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:45.446 [2024-12-06 19:28:55.842705] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:45.446 [2024-12-06 19:28:55.842773] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:45.446 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:45.446 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1242cf92-d9fc-4ad9-9341-24379a3ea532 00:30:45.446 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=1242cf92-d9fc-4ad9-9341-24379a3ea532 00:30:45.446 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:45.446 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:45.446 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:45.446 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:45.446 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:45.703 19:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1242cf92-d9fc-4ad9-9341-24379a3ea532 -t 2000 00:30:45.960 [ 00:30:45.960 { 00:30:45.960 "name": "1242cf92-d9fc-4ad9-9341-24379a3ea532", 00:30:45.960 "aliases": [ 00:30:45.960 "lvs/lvol" 00:30:45.960 ], 00:30:45.960 "product_name": "Logical Volume", 00:30:45.960 "block_size": 4096, 00:30:45.960 "num_blocks": 38912, 00:30:45.960 "uuid": "1242cf92-d9fc-4ad9-9341-24379a3ea532", 00:30:45.960 "assigned_rate_limits": { 00:30:45.960 "rw_ios_per_sec": 0, 00:30:45.960 "rw_mbytes_per_sec": 0, 00:30:45.960 "r_mbytes_per_sec": 0, 00:30:45.960 "w_mbytes_per_sec": 0 00:30:45.960 }, 00:30:45.960 "claimed": false, 00:30:45.960 "zoned": false, 00:30:45.960 "supported_io_types": { 00:30:45.960 "read": true, 00:30:45.960 "write": true, 00:30:45.960 "unmap": true, 00:30:45.960 "flush": false, 00:30:45.960 "reset": true, 00:30:45.960 "nvme_admin": false, 00:30:45.960 "nvme_io": false, 00:30:45.960 "nvme_io_md": false, 00:30:45.960 "write_zeroes": true, 
00:30:45.960 "zcopy": false, 00:30:45.960 "get_zone_info": false, 00:30:45.961 "zone_management": false, 00:30:45.961 "zone_append": false, 00:30:45.961 "compare": false, 00:30:45.961 "compare_and_write": false, 00:30:45.961 "abort": false, 00:30:45.961 "seek_hole": true, 00:30:45.961 "seek_data": true, 00:30:45.961 "copy": false, 00:30:45.961 "nvme_iov_md": false 00:30:45.961 }, 00:30:45.961 "driver_specific": { 00:30:45.961 "lvol": { 00:30:45.961 "lvol_store_uuid": "e080aa7a-fb67-46bd-8ba6-d592e6702f2b", 00:30:45.961 "base_bdev": "aio_bdev", 00:30:45.961 "thin_provision": false, 00:30:45.961 "num_allocated_clusters": 38, 00:30:45.961 "snapshot": false, 00:30:45.961 "clone": false, 00:30:45.961 "esnap_clone": false 00:30:45.961 } 00:30:45.961 } 00:30:45.961 } 00:30:45.961 ] 00:30:45.961 19:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:45.961 19:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:45.961 19:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:46.218 19:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:46.218 19:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:46.218 19:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:46.477 19:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:46.477 19:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:46.735 [2024-12-06 19:28:57.228242] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:46.735 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:46.993 request: 00:30:46.993 { 00:30:46.993 "uuid": "e080aa7a-fb67-46bd-8ba6-d592e6702f2b", 00:30:46.993 "method": "bdev_lvol_get_lvstores", 00:30:46.993 "req_id": 1 00:30:46.993 } 00:30:46.993 Got JSON-RPC error response 00:30:46.993 response: 00:30:46.993 { 00:30:46.993 "code": -19, 00:30:46.993 "message": "No such device" 00:30:46.993 } 00:30:46.993 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:46.993 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:46.993 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:46.993 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:46.993 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:47.251 aio_bdev 00:30:47.252 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1242cf92-d9fc-4ad9-9341-24379a3ea532 00:30:47.252 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1242cf92-d9fc-4ad9-9341-24379a3ea532 00:30:47.252 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:47.252 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:47.252 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:47.252 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:47.252 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:47.819 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1242cf92-d9fc-4ad9-9341-24379a3ea532 -t 2000 00:30:47.819 [ 00:30:47.819 { 00:30:47.819 "name": "1242cf92-d9fc-4ad9-9341-24379a3ea532", 00:30:47.819 "aliases": [ 00:30:47.819 "lvs/lvol" 00:30:47.819 ], 00:30:47.819 "product_name": "Logical Volume", 00:30:47.819 "block_size": 4096, 00:30:47.819 "num_blocks": 38912, 00:30:47.819 "uuid": "1242cf92-d9fc-4ad9-9341-24379a3ea532", 00:30:47.819 "assigned_rate_limits": { 00:30:47.819 "rw_ios_per_sec": 0, 00:30:47.819 "rw_mbytes_per_sec": 0, 00:30:47.819 
"r_mbytes_per_sec": 0, 00:30:47.819 "w_mbytes_per_sec": 0 00:30:47.819 }, 00:30:47.819 "claimed": false, 00:30:47.819 "zoned": false, 00:30:47.819 "supported_io_types": { 00:30:47.819 "read": true, 00:30:47.819 "write": true, 00:30:47.819 "unmap": true, 00:30:47.819 "flush": false, 00:30:47.819 "reset": true, 00:30:47.819 "nvme_admin": false, 00:30:47.819 "nvme_io": false, 00:30:47.819 "nvme_io_md": false, 00:30:47.819 "write_zeroes": true, 00:30:47.819 "zcopy": false, 00:30:47.819 "get_zone_info": false, 00:30:47.819 "zone_management": false, 00:30:47.819 "zone_append": false, 00:30:47.819 "compare": false, 00:30:47.819 "compare_and_write": false, 00:30:47.819 "abort": false, 00:30:47.819 "seek_hole": true, 00:30:47.819 "seek_data": true, 00:30:47.819 "copy": false, 00:30:47.819 "nvme_iov_md": false 00:30:47.819 }, 00:30:47.819 "driver_specific": { 00:30:47.819 "lvol": { 00:30:47.819 "lvol_store_uuid": "e080aa7a-fb67-46bd-8ba6-d592e6702f2b", 00:30:47.819 "base_bdev": "aio_bdev", 00:30:47.819 "thin_provision": false, 00:30:47.819 "num_allocated_clusters": 38, 00:30:47.819 "snapshot": false, 00:30:47.819 "clone": false, 00:30:47.819 "esnap_clone": false 00:30:47.819 } 00:30:47.819 } 00:30:47.819 } 00:30:47.819 ] 00:30:47.819 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:47.819 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:47.819 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:48.386 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:48.386 19:28:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:48.386 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:48.386 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:48.386 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1242cf92-d9fc-4ad9-9341-24379a3ea532 00:30:48.644 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e080aa7a-fb67-46bd-8ba6-d592e6702f2b 00:30:49.211 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:49.211 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:49.470 00:30:49.470 real 0m19.600s 00:30:49.470 user 0m36.566s 00:30:49.470 sys 0m4.834s 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:49.470 ************************************ 00:30:49.470 END TEST lvs_grow_dirty 00:30:49.470 ************************************ 
00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:49.470 nvmf_trace.0 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:49.470 19:28:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:49.470 rmmod nvme_tcp 00:30:49.470 rmmod nvme_fabrics 00:30:49.470 rmmod nvme_keyring 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1258421 ']' 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1258421 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1258421 ']' 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1258421 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1258421 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:49.470 
19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1258421' 00:30:49.470 killing process with pid 1258421 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1258421 00:30:49.470 19:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1258421 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.729 19:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.262 
19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:52.262 00:30:52.262 real 0m43.216s 00:30:52.262 user 0m55.648s 00:30:52.262 sys 0m8.957s 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:52.262 ************************************ 00:30:52.262 END TEST nvmf_lvs_grow 00:30:52.262 ************************************ 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:52.262 ************************************ 00:30:52.262 START TEST nvmf_bdev_io_wait 00:30:52.262 ************************************ 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:52.262 * Looking for test storage... 
00:30:52.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.262 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:52.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.263 --rc genhtml_branch_coverage=1 00:30:52.263 --rc genhtml_function_coverage=1 00:30:52.263 --rc genhtml_legend=1 00:30:52.263 --rc geninfo_all_blocks=1 00:30:52.263 --rc geninfo_unexecuted_blocks=1 00:30:52.263 00:30:52.263 ' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:52.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.263 --rc genhtml_branch_coverage=1 00:30:52.263 --rc genhtml_function_coverage=1 00:30:52.263 --rc genhtml_legend=1 00:30:52.263 --rc geninfo_all_blocks=1 00:30:52.263 --rc geninfo_unexecuted_blocks=1 00:30:52.263 00:30:52.263 ' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:52.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.263 --rc genhtml_branch_coverage=1 00:30:52.263 --rc genhtml_function_coverage=1 00:30:52.263 --rc genhtml_legend=1 00:30:52.263 --rc geninfo_all_blocks=1 00:30:52.263 --rc geninfo_unexecuted_blocks=1 00:30:52.263 00:30:52.263 ' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:52.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.263 --rc genhtml_branch_coverage=1 00:30:52.263 --rc genhtml_function_coverage=1 
00:30:52.263 --rc genhtml_legend=1 00:30:52.263 --rc geninfo_all_blocks=1 00:30:52.263 --rc geninfo_unexecuted_blocks=1 00:30:52.263 00:30:52.263 ' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:52.263 19:29:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.263 19:29:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.263 19:29:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:52.263 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:52.264 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:52.264 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.264 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.264 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.264 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:52.264 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:52.264 19:29:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.264 19:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:54.167 19:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:54.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:54.167 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:54.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.167 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:54.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:54.168 19:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:54.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:54.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:30:54.168 00:30:54.168 --- 10.0.0.2 ping statistics --- 00:30:54.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.168 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:54.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:30:54.168 00:30:54.168 --- 10.0.0.1 ping statistics --- 00:30:54.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.168 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:54.168 19:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1260956 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1260956 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1260956 ']' 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.168 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.168 [2024-12-06 19:29:04.695610] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:54.168 [2024-12-06 19:29:04.696722] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:30:54.168 [2024-12-06 19:29:04.696787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.426 [2024-12-06 19:29:04.771369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:54.426 [2024-12-06 19:29:04.830304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.427 [2024-12-06 19:29:04.830373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.427 [2024-12-06 19:29:04.830387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.427 [2024-12-06 19:29:04.830413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.427 [2024-12-06 19:29:04.830422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:54.427 [2024-12-06 19:29:04.831926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.427 [2024-12-06 19:29:04.832033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.427 [2024-12-06 19:29:04.832122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.427 [2024-12-06 19:29:04.832125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.427 [2024-12-06 19:29:04.832564] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.427 19:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.427 19:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.689 [2024-12-06 19:29:05.014930] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:54.689 [2024-12-06 19:29:05.015168] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:54.689 [2024-12-06 19:29:05.016116] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:54.689 [2024-12-06 19:29:05.017023] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.689 [2024-12-06 19:29:05.024759] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.689 Malloc0 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.689 19:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.689 [2024-12-06 19:29:05.080972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1261033 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1261036 00:30:54.689 19:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:54.689 { 00:30:54.689 "params": { 00:30:54.689 "name": "Nvme$subsystem", 00:30:54.689 "trtype": "$TEST_TRANSPORT", 00:30:54.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.689 "adrfam": "ipv4", 00:30:54.689 "trsvcid": "$NVMF_PORT", 00:30:54.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.689 "hdgst": ${hdgst:-false}, 00:30:54.689 "ddgst": ${ddgst:-false} 00:30:54.689 }, 00:30:54.689 "method": "bdev_nvme_attach_controller" 00:30:54.689 } 00:30:54.689 EOF 00:30:54.689 )") 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1261039 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:54.689 19:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:54.689 { 00:30:54.689 "params": { 00:30:54.689 "name": "Nvme$subsystem", 00:30:54.689 "trtype": "$TEST_TRANSPORT", 00:30:54.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.689 "adrfam": "ipv4", 00:30:54.689 "trsvcid": "$NVMF_PORT", 00:30:54.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.689 "hdgst": ${hdgst:-false}, 00:30:54.689 "ddgst": ${ddgst:-false} 00:30:54.689 }, 00:30:54.689 "method": "bdev_nvme_attach_controller" 00:30:54.689 } 00:30:54.689 EOF 00:30:54.689 )") 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1261043 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:54.689 { 00:30:54.689 "params": { 00:30:54.689 "name": 
"Nvme$subsystem", 00:30:54.689 "trtype": "$TEST_TRANSPORT", 00:30:54.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.689 "adrfam": "ipv4", 00:30:54.689 "trsvcid": "$NVMF_PORT", 00:30:54.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.689 "hdgst": ${hdgst:-false}, 00:30:54.689 "ddgst": ${ddgst:-false} 00:30:54.689 }, 00:30:54.689 "method": "bdev_nvme_attach_controller" 00:30:54.689 } 00:30:54.689 EOF 00:30:54.689 )") 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:54.689 { 00:30:54.689 "params": { 00:30:54.689 "name": "Nvme$subsystem", 00:30:54.689 "trtype": "$TEST_TRANSPORT", 00:30:54.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.689 "adrfam": "ipv4", 00:30:54.689 "trsvcid": "$NVMF_PORT", 00:30:54.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.689 "hdgst": ${hdgst:-false}, 00:30:54.689 "ddgst": ${ddgst:-false} 00:30:54.689 }, 00:30:54.689 "method": 
"bdev_nvme_attach_controller" 00:30:54.689 } 00:30:54.689 EOF 00:30:54.689 )") 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1261033 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:54.689 "params": { 00:30:54.689 "name": "Nvme1", 00:30:54.689 "trtype": "tcp", 00:30:54.689 "traddr": "10.0.0.2", 00:30:54.689 "adrfam": "ipv4", 00:30:54.689 "trsvcid": "4420", 00:30:54.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.689 "hdgst": false, 00:30:54.689 "ddgst": false 00:30:54.689 }, 00:30:54.689 "method": "bdev_nvme_attach_controller" 00:30:54.689 }' 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:54.689 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:54.689 "params": { 00:30:54.689 "name": "Nvme1", 00:30:54.689 "trtype": "tcp", 00:30:54.689 "traddr": "10.0.0.2", 00:30:54.689 "adrfam": "ipv4", 00:30:54.689 "trsvcid": "4420", 00:30:54.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.690 "hdgst": false, 00:30:54.690 "ddgst": false 00:30:54.690 }, 00:30:54.690 "method": "bdev_nvme_attach_controller" 00:30:54.690 }' 00:30:54.690 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:54.690 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:54.690 "params": { 00:30:54.690 "name": "Nvme1", 00:30:54.690 "trtype": "tcp", 00:30:54.690 "traddr": "10.0.0.2", 00:30:54.690 "adrfam": "ipv4", 00:30:54.690 "trsvcid": "4420", 00:30:54.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.690 "hdgst": false, 00:30:54.690 "ddgst": false 00:30:54.690 }, 00:30:54.690 "method": "bdev_nvme_attach_controller" 00:30:54.690 }' 00:30:54.690 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:54.690 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:54.690 "params": { 00:30:54.690 "name": "Nvme1", 00:30:54.690 "trtype": "tcp", 00:30:54.690 "traddr": "10.0.0.2", 00:30:54.690 "adrfam": "ipv4", 00:30:54.690 "trsvcid": "4420", 00:30:54.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.690 "hdgst": false, 00:30:54.690 "ddgst": false 00:30:54.690 }, 00:30:54.690 "method": "bdev_nvme_attach_controller" 
00:30:54.690 }' 00:30:54.690 [2024-12-06 19:29:05.130761] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:30:54.690 [2024-12-06 19:29:05.130761] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:30:54.690 [2024-12-06 19:29:05.130854] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:54.690 [2024-12-06 19:29:05.130854] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:54.690 [2024-12-06 19:29:05.131124] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:30:54.690 [2024-12-06 19:29:05.131205] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:54.690 [2024-12-06 19:29:05.131293] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:30:54.690 [2024-12-06 19:29:05.131351] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:54.949 [2024-12-06 19:29:05.313555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.949 [2024-12-06 19:29:05.367101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:54.949 [2024-12-06 19:29:05.411852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.949 [2024-12-06 19:29:05.465183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:54.949 [2024-12-06 19:29:05.510122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.206 [2024-12-06 19:29:05.567085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:55.206 [2024-12-06 19:29:05.585140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.206 [2024-12-06 19:29:05.635886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:55.206 Running I/O for 1 seconds... 00:30:55.206 Running I/O for 1 seconds... 00:30:55.206 Running I/O for 1 seconds... 00:30:55.463 Running I/O for 1 seconds... 
00:30:56.397 10398.00 IOPS, 40.62 MiB/s 00:30:56.397 Latency(us) 00:30:56.397 [2024-12-06T18:29:06.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.397 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:56.397 Nvme1n1 : 1.01 10461.56 40.87 0.00 0.00 12187.93 4660.34 14757.74 00:30:56.397 [2024-12-06T18:29:06.974Z] =================================================================================================================== 00:30:56.397 [2024-12-06T18:29:06.974Z] Total : 10461.56 40.87 0.00 0.00 12187.93 4660.34 14757.74 00:30:56.397 9699.00 IOPS, 37.89 MiB/s [2024-12-06T18:29:06.974Z] 8249.00 IOPS, 32.22 MiB/s 00:30:56.397 Latency(us) 00:30:56.397 [2024-12-06T18:29:06.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.397 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:56.397 Nvme1n1 : 1.01 9771.42 38.17 0.00 0.00 13049.99 2463.67 19029.71 00:30:56.397 [2024-12-06T18:29:06.974Z] =================================================================================================================== 00:30:56.397 [2024-12-06T18:29:06.974Z] Total : 9771.42 38.17 0.00 0.00 13049.99 2463.67 19029.71 00:30:56.397 00:30:56.397 Latency(us) 00:30:56.397 [2024-12-06T18:29:06.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.397 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:56.397 Nvme1n1 : 1.01 8311.94 32.47 0.00 0.00 15333.96 5946.79 21942.42 00:30:56.397 [2024-12-06T18:29:06.974Z] =================================================================================================================== 00:30:56.397 [2024-12-06T18:29:06.974Z] Total : 8311.94 32.47 0.00 0.00 15333.96 5946.79 21942.42 00:30:56.397 171768.00 IOPS, 670.97 MiB/s 00:30:56.397 Latency(us) 00:30:56.397 [2024-12-06T18:29:06.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:30:56.397 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:56.397 Nvme1n1 : 1.00 171445.80 669.71 0.00 0.00 742.51 288.24 1844.72 00:30:56.397 [2024-12-06T18:29:06.974Z] =================================================================================================================== 00:30:56.397 [2024-12-06T18:29:06.974Z] Total : 171445.80 669.71 0.00 0.00 742.51 288.24 1844.72 00:30:56.397 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1261036 00:30:56.397 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1261039 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1261043 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.655 rmmod nvme_tcp 00:30:56.655 rmmod nvme_fabrics 00:30:56.655 rmmod nvme_keyring 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1260956 ']' 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1260956 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1260956 ']' 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1260956 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1260956 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:56.655 19:29:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1260956' 00:30:56.655 killing process with pid 1260956 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1260956 00:30:56.655 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1260956 00:30:56.913 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:56.913 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:56.913 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:56.913 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:56.913 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:56.913 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:56.914 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:56.914 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.914 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.914 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.914 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.914 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.455 00:30:59.455 real 0m7.134s 00:30:59.455 user 0m13.865s 00:30:59.455 sys 0m4.148s 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:59.455 ************************************ 00:30:59.455 END TEST nvmf_bdev_io_wait 00:30:59.455 ************************************ 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:59.455 ************************************ 00:30:59.455 START TEST nvmf_queue_depth 00:30:59.455 ************************************ 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:59.455 * Looking for test storage... 
00:30:59.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:59.455 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.456 --rc genhtml_branch_coverage=1 00:30:59.456 --rc genhtml_function_coverage=1 00:30:59.456 --rc genhtml_legend=1 00:30:59.456 --rc geninfo_all_blocks=1 00:30:59.456 --rc geninfo_unexecuted_blocks=1 00:30:59.456 00:30:59.456 ' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.456 --rc genhtml_branch_coverage=1 00:30:59.456 --rc genhtml_function_coverage=1 00:30:59.456 --rc genhtml_legend=1 00:30:59.456 --rc geninfo_all_blocks=1 00:30:59.456 --rc geninfo_unexecuted_blocks=1 00:30:59.456 00:30:59.456 ' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.456 --rc genhtml_branch_coverage=1 00:30:59.456 --rc genhtml_function_coverage=1 00:30:59.456 --rc genhtml_legend=1 00:30:59.456 --rc geninfo_all_blocks=1 00:30:59.456 --rc geninfo_unexecuted_blocks=1 00:30:59.456 00:30:59.456 ' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.456 --rc genhtml_branch_coverage=1 00:30:59.456 --rc genhtml_function_coverage=1 00:30:59.456 --rc genhtml_legend=1 00:30:59.456 --rc 
geninfo_all_blocks=1 00:30:59.456 --rc geninfo_unexecuted_blocks=1 00:30:59.456 00:30:59.456 ' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.456 19:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.456 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.457 19:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:59.457 19:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.457 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.358 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.359 
19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:01.359 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.359 19:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:01.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:01.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:01.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:01.359 19:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.359 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:01.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:31:01.360 00:31:01.360 --- 10.0.0.2 ping statistics --- 00:31:01.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.360 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:01.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:31:01.360 00:31:01.360 --- 10.0.0.1 ping statistics --- 00:31:01.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.360 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:01.360 19:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1263209 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1263209 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1263209 ']' 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.360 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.618 [2024-12-06 19:29:11.961461] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:01.618 [2024-12-06 19:29:11.962564] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:31:01.618 [2024-12-06 19:29:11.962643] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.618 [2024-12-06 19:29:12.040531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.618 [2024-12-06 19:29:12.095290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.618 [2024-12-06 19:29:12.095352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.618 [2024-12-06 19:29:12.095382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.618 [2024-12-06 19:29:12.095393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.618 [2024-12-06 19:29:12.095402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.618 [2024-12-06 19:29:12.096024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.618 [2024-12-06 19:29:12.180656] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:01.618 [2024-12-06 19:29:12.180982] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.877 [2024-12-06 19:29:12.236601] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.877 Malloc0 00:31:01.877 19:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.877 [2024-12-06 19:29:12.296735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.877 
19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1263337 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1263337 /var/tmp/bdevperf.sock 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1263337 ']' 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:01.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:01.877 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.878 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.878 [2024-12-06 19:29:12.344459] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:31:01.878 [2024-12-06 19:29:12.344523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263337 ] 00:31:01.878 [2024-12-06 19:29:12.410584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.136 [2024-12-06 19:29:12.470118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.136 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.136 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:02.136 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.136 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.136 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.136 NVMe0n1 00:31:02.136 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.136 19:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:02.409 Running I/O for 10 seconds... 
00:31:04.274 8192.00 IOPS, 32.00 MiB/s [2024-12-06T18:29:16.225Z] 8646.50 IOPS, 33.78 MiB/s [2024-12-06T18:29:17.159Z] 8540.00 IOPS, 33.36 MiB/s [2024-12-06T18:29:18.094Z] 8702.75 IOPS, 34.00 MiB/s [2024-12-06T18:29:19.028Z] 8802.00 IOPS, 34.38 MiB/s [2024-12-06T18:29:19.972Z] 8836.67 IOPS, 34.52 MiB/s [2024-12-06T18:29:20.983Z] 8809.86 IOPS, 34.41 MiB/s [2024-12-06T18:29:21.918Z] 8832.88 IOPS, 34.50 MiB/s [2024-12-06T18:29:22.853Z] 8866.67 IOPS, 34.64 MiB/s [2024-12-06T18:29:23.111Z] 8878.70 IOPS, 34.68 MiB/s 00:31:12.534 Latency(us) 00:31:12.534 [2024-12-06T18:29:23.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.534 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:12.534 Verification LBA range: start 0x0 length 0x4000 00:31:12.534 NVMe0n1 : 10.13 8858.02 34.60 0.00 0.00 114573.49 22136.60 73011.96 00:31:12.534 [2024-12-06T18:29:23.111Z] =================================================================================================================== 00:31:12.534 [2024-12-06T18:29:23.111Z] Total : 8858.02 34.60 0.00 0.00 114573.49 22136.60 73011.96 00:31:12.534 { 00:31:12.534 "results": [ 00:31:12.534 { 00:31:12.534 "job": "NVMe0n1", 00:31:12.534 "core_mask": "0x1", 00:31:12.534 "workload": "verify", 00:31:12.534 "status": "finished", 00:31:12.534 "verify_range": { 00:31:12.534 "start": 0, 00:31:12.534 "length": 16384 00:31:12.534 }, 00:31:12.534 "queue_depth": 1024, 00:31:12.534 "io_size": 4096, 00:31:12.534 "runtime": 10.130025, 00:31:12.534 "iops": 8858.023548806641, 00:31:12.534 "mibps": 34.60165448752594, 00:31:12.534 "io_failed": 0, 00:31:12.534 "io_timeout": 0, 00:31:12.534 "avg_latency_us": 114573.49315449627, 00:31:12.534 "min_latency_us": 22136.604444444445, 00:31:12.534 "max_latency_us": 73011.95851851851 00:31:12.534 } 00:31:12.534 ], 00:31:12.534 "core_count": 1 00:31:12.534 } 00:31:12.534 19:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1263337 00:31:12.534 19:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1263337 ']' 00:31:12.534 19:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1263337 00:31:12.534 19:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:12.534 19:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.534 19:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1263337 00:31:12.534 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:12.534 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:12.534 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1263337' 00:31:12.534 killing process with pid 1263337 00:31:12.534 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1263337 00:31:12.534 Received shutdown signal, test time was about 10.000000 seconds 00:31:12.534 00:31:12.534 Latency(us) 00:31:12.534 [2024-12-06T18:29:23.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.534 [2024-12-06T18:29:23.111Z] =================================================================================================================== 00:31:12.534 [2024-12-06T18:29:23.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.534 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1263337 00:31:12.792 19:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.792 rmmod nvme_tcp 00:31:12.792 rmmod nvme_fabrics 00:31:12.792 rmmod nvme_keyring 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1263209 ']' 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1263209 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1263209 ']' 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1263209 00:31:12.792 19:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.792 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1263209 00:31:12.793 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:12.793 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:12.793 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1263209' 00:31:12.793 killing process with pid 1263209 00:31:12.793 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1263209 00:31:12.793 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1263209 00:31:13.050 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:13.050 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:13.050 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:13.050 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:13.051 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:13.051 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:13.051 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:13.051 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.051 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.051 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.051 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.051 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.586 00:31:15.586 real 0m16.163s 00:31:15.586 user 0m22.253s 00:31:15.586 sys 0m3.416s 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:15.586 ************************************ 00:31:15.586 END TEST nvmf_queue_depth 00:31:15.586 ************************************ 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.586 ************************************ 00:31:15.586 START 
TEST nvmf_target_multipath 00:31:15.586 ************************************ 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:15.586 * Looking for test storage... 00:31:15.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.586 19:29:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:15.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.586 --rc genhtml_branch_coverage=1 00:31:15.586 --rc genhtml_function_coverage=1 00:31:15.586 --rc genhtml_legend=1 00:31:15.586 --rc geninfo_all_blocks=1 00:31:15.586 --rc geninfo_unexecuted_blocks=1 00:31:15.586 00:31:15.586 ' 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:15.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.586 --rc genhtml_branch_coverage=1 00:31:15.586 --rc genhtml_function_coverage=1 00:31:15.586 --rc genhtml_legend=1 00:31:15.586 --rc geninfo_all_blocks=1 00:31:15.586 --rc geninfo_unexecuted_blocks=1 00:31:15.586 00:31:15.586 ' 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:15.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.586 --rc genhtml_branch_coverage=1 00:31:15.586 --rc genhtml_function_coverage=1 00:31:15.586 --rc genhtml_legend=1 00:31:15.586 --rc geninfo_all_blocks=1 00:31:15.586 --rc geninfo_unexecuted_blocks=1 00:31:15.586 00:31:15.586 ' 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:15.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.586 --rc genhtml_branch_coverage=1 00:31:15.586 --rc genhtml_function_coverage=1 00:31:15.586 --rc genhtml_legend=1 00:31:15.586 --rc geninfo_all_blocks=1 00:31:15.586 --rc geninfo_unexecuted_blocks=1 00:31:15.586 00:31:15.586 ' 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.586 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.587 19:29:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.587 19:29:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.587 19:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.488 19:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:17.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:17.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:17.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.488 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.488 19:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:17.489 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.489 19:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.489 19:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:17.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:17.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms
00:31:17.489
00:31:17.489 --- 10.0.0.2 ping statistics ---
00:31:17.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:17.489 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:17.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:17.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms
00:31:17.489
00:31:17.489 --- 10.0.0.1 ping statistics ---
00:31:17.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:17.489 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:17.489 19:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
only one NIC for nvmf test
00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:31:17.489 19:29:28
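The nvmf_tcp_init sequence traced above builds the test topology out of the two ports of one NIC: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), and the two pings verify connectivity both ways. Below is a minimal dry-run sketch of the same wiring, not the SPDK function itself: the names TGT/INI/NS are taken from this log, and the added RUN=echo guard is an assumption of this sketch so that it only prints the commands unless invoked as root with RUN set empty.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology nvmf_tcp_init builds above.
# RUN=echo (the default here) prints the plan; run with RUN= as root
# to actually apply it.
TGT=cvl_0_0 INI=cvl_0_1 NS=cvl_0_0_ns_spdk
RUN=${RUN-echo}

build_topology() {
  $RUN ip -4 addr flush "$TGT"
  $RUN ip -4 addr flush "$INI"
  $RUN ip netns add "$NS"
  $RUN ip link set "$TGT" netns "$NS"       # target port lives in the namespace
  $RUN ip addr add 10.0.0.1/24 dev "$INI"   # initiator port stays in the root ns
  $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  $RUN ip link set "$INI" up
  $RUN ip netns exec "$NS" ip link set "$TGT" up
  $RUN ip netns exec "$NS" ip link set lo up
}
build_topology
```

Because the target end sits in its own namespace, traffic between 10.0.0.1 and 10.0.0.2 cannot short-circuit through the host loopback and must traverse the interfaces themselves, which is what makes a single machine usable for fabric tests.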
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.489 rmmod nvme_tcp 00:31:17.489 rmmod nvme_fabrics 00:31:17.489 rmmod nvme_keyring 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:17.489 19:29:28 
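The nvmfcleanup trace above shows the module-unload pattern: errexit is switched off (`set +e`), `modprobe -v -r nvme-tcp` runs inside a `for i in {1..20}` retry loop (a module can stay busy for a moment after a test tears down its connections), then errexit is restored and the function returns 0 regardless. A stand-alone sketch of that pattern, where flaky_rmmod is a hypothetical stand-in that fails twice before succeeding rather than a real modprobe:

```shell
#!/usr/bin/env bash
# Retry-until-it-sticks cleanup, as in nvmfcleanup above.
attempts=0
flaky_rmmod() {
  # Stand-in for `modprobe -r`: fails on the first two calls, then succeeds.
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

set +e                      # a busy module must not abort the whole test run
for i in {1..20}; do
  flaky_rmmod && break      # stop retrying as soon as removal succeeds
done
set -e
```

The `set +e` / `set -e` bracket is the important part: without it, the first failed removal would kill the script before the retry loop ever got a second chance.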
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.489 19:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
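The iptr step above undoes every firewall rule the test added without tracking them individually: each rule was inserted with an iptables comment tag (`-m comment --comment 'SPDK_NVMF:...'`, visible earlier in this log), so cleanup can simply replay the saved ruleset minus the tagged lines via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A root-free sketch of that sweep, operating on a fabricated text listing instead of a live ruleset:

```shell
#!/usr/bin/env bash
# Tag-and-sweep firewall cleanup, simulated on plain text.
# saved_rules is made-up sample data standing in for `iptables-save` output.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1"
-A INPUT -p icmp -j ACCEPT'

# Drop every line carrying the tag; the survivors would be fed to
# iptables-restore in the real flow.
cleaned=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
```

Tagging at insert time costs nothing, and it makes teardown idempotent: however many rules the tests stacked up, one filtered restore removes exactly those and leaves unrelated rules untouched.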
00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.025 
19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:20.025 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:20.026
00:31:20.026 real 0m4.434s
00:31:20.026 user 0m0.854s
00:31:20.026 sys 0m1.574s
00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:31:20.026 ************************************
00:31:20.026 END TEST nvmf_target_multipath
00:31:20.026 ************************************
00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:20.026 ************************************
00:31:20.026 START TEST nvmf_zcopy
00:31:20.026 ************************************
00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:31:20.026 * Looking for test storage...
00:31:20.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:20.026 19:29:30 
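The scripts/common.sh trace above is `lt 1.15 2` deciding whether the installed lcov predates version 2: each version string is split on dots and dashes (`IFS=.-` with `read -ra`) into an array, and the components are compared numerically field by field, treating missing fields as 0. A compact re-sketch of that comparison; version_lt is a hypothetical stand-alone name, not the SPDK helper itself:

```shell
#!/usr/bin/env bash
# Field-by-field numeric version comparison, as traced above.
# Returns 0 (true) iff $1 is strictly lower than $2.
version_lt() {
  local -a a b
  IFS=.- read -ra a <<< "$1"       # "1.15" -> (1 15)
  IFS=.- read -ra b <<< "$2"       # "2"    -> (2)
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # absent components count as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                            # equal versions are not "less than"
}
```

Splitting before comparing is what makes `1.9 < 1.10` come out right; a plain string comparison would get it backwards.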
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.026 --rc genhtml_branch_coverage=1 00:31:20.026 --rc genhtml_function_coverage=1 00:31:20.026 --rc genhtml_legend=1 00:31:20.026 --rc geninfo_all_blocks=1 00:31:20.026 --rc geninfo_unexecuted_blocks=1 00:31:20.026 00:31:20.026 ' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.026 --rc genhtml_branch_coverage=1 00:31:20.026 --rc genhtml_function_coverage=1 00:31:20.026 --rc genhtml_legend=1 00:31:20.026 --rc geninfo_all_blocks=1 00:31:20.026 --rc geninfo_unexecuted_blocks=1 00:31:20.026 00:31:20.026 ' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.026 --rc genhtml_branch_coverage=1 00:31:20.026 --rc genhtml_function_coverage=1 00:31:20.026 --rc genhtml_legend=1 00:31:20.026 --rc geninfo_all_blocks=1 00:31:20.026 --rc geninfo_unexecuted_blocks=1 00:31:20.026 00:31:20.026 ' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.026 --rc genhtml_branch_coverage=1 00:31:20.026 --rc genhtml_function_coverage=1 00:31:20.026 --rc genhtml_legend=1 00:31:20.026 --rc geninfo_all_blocks=1 00:31:20.026 --rc geninfo_unexecuted_blocks=1 00:31:20.026 00:31:20.026 ' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.026 19:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:20.026 19:29:30 
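The paths/export.sh lines above are re-sourced for every nested test, so each pass prepends the same go/protoc/golangci directories again and PATH accumulates duplicates, as the ever-longer exported values show. Duplicate entries are harmless for command lookup (the first hit wins), but a PATH like this can be collapsed with a first-occurrence-wins filter. The helper below is a hypothetical illustration, not something the SPDK scripts themselves run:

```shell
#!/usr/bin/env bash
# Collapse a colon-separated PATH-like string, keeping the first
# occurrence of each directory and preserving order.
dedup_path() {
  local out= seen=: dir
  local IFS=:                      # split $1 on colons below
  for dir in $1; do
    case "$seen" in *":$dir:"*) continue ;; esac   # already kept -> skip
    seen="$seen$dir:"
    out="${out:+$out:}$dir"
  done
  printf '%s\n' "$out"
}
```

Usage would be `PATH=$(dedup_path "$PATH")`; since lookup order is unchanged, this is purely cosmetic plus a tiny lookup-speed win.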
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:20.026 19:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.926 
19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.926 19:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:21.926 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.926 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:22.185 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:22.185 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:22.185 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.185 19:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:22.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:22.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms
00:31:22.185
00:31:22.185 --- 10.0.0.2 ping statistics ---
00:31:22.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:22.185 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:22.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:22.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:31:22.185 00:31:22.185 --- 10.0.0.1 ping statistics --- 00:31:22.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.185 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1268516 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:22.185 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1268516 00:31:22.186 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1268516 ']' 00:31:22.186 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.186 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:22.186 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.186 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:22.186 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.186 [2024-12-06 19:29:32.734563] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:22.186 [2024-12-06 19:29:32.735748] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
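The `ip netns` / `ip addr` / `ip link` sequence earlier in the trace splits the two E810 ports into a target/initiator pair: `cvl_0_0` is moved into a private namespace for `nvmf_tgt` (10.0.0.2), while `cvl_0_1` stays in the root namespace as the initiator (10.0.0.1), and the `ping` exchange verifies the path before the target app is launched. The same sequence as a function, defaulting to a dry run (`RUN=echo`) so it can be previewed without root; names and addresses mirror the log, but this is a sketch, not the harness's own helper:

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator topology from the log. With RUN unset the
# commands are only echoed; set RUN= (empty) and run as root to apply them.
set -euo pipefail

setup_tcp_pair() {
    local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
    local run=${RUN:-echo}                      # dry-run by default
    $run ip netns add "$ns"
    $run ip link set "$tgt_if" netns "$ns"      # target port leaves root ns
    $run ip addr add "$ini_ip/24" dev "$ini_if"
    $run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    $run ip link set "$ini_if" up
    $run ip netns exec "$ns" ip link set "$tgt_if" up
    $run ip netns exec "$ns" ip link set lo up
}

setup_tcp_pair cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1
```

The iptables ACCEPT rule for port 4420 (added right after this in the log) is what lets the initiator reach the NVMe/TCP listener through the root namespace's firewall.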
00:31:22.186 [2024-12-06 19:29:32.735808] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.444 [2024-12-06 19:29:32.808307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.444 [2024-12-06 19:29:32.866193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.444 [2024-12-06 19:29:32.866269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.444 [2024-12-06 19:29:32.866282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.444 [2024-12-06 19:29:32.866293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.444 [2024-12-06 19:29:32.866302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.444 [2024-12-06 19:29:32.866955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.444 [2024-12-06 19:29:32.964422] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:22.444 [2024-12-06 19:29:32.964720] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
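`waitforlisten 1268516` above blocks until the freshly spawned `nvmf_tgt` is actually serving RPCs on `/var/tmp/spdk.sock` (the `max_retries=100` local in the trace is its retry budget). A minimal stand-in for that polling loop; the real helper also verifies the pid is alive and probes the socket with an RPC, which this sketch omits:

```shell
#!/usr/bin/env bash
# Poll for a path with a bounded number of attempts, as waitforlisten does
# for the target's RPC socket. Illustrative only.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# Demo: the path appears asynchronously, as the target app's socket would.
sock=$(mktemp -u)
(sleep 0.3; touch "$sock") &
wait_for_path "$sock" && echo "listening: $sock"
wait
rm -f "$sock"
```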
00:31:22.444 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:22.444 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:22.444 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:22.444 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.444 19:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.444 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.444 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:22.444 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:22.444 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.444 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.444 [2024-12-06 19:29:33.015622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.444 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.444 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:22.444 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.444 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.703 
19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.703 [2024-12-06 19:29:33.031847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.703 malloc0 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.703 { 00:31:22.703 "params": { 00:31:22.703 "name": "Nvme$subsystem", 00:31:22.703 "trtype": "$TEST_TRANSPORT", 00:31:22.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.703 "adrfam": "ipv4", 00:31:22.703 "trsvcid": "$NVMF_PORT", 00:31:22.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.703 "hdgst": ${hdgst:-false}, 00:31:22.703 "ddgst": ${ddgst:-false} 00:31:22.703 }, 00:31:22.703 "method": "bdev_nvme_attach_controller" 00:31:22.703 } 00:31:22.703 EOF 00:31:22.703 )") 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:22.703 19:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:22.703 19:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:22.703 "params": { 00:31:22.703 "name": "Nvme1", 00:31:22.703 "trtype": "tcp", 00:31:22.703 "traddr": "10.0.0.2", 00:31:22.703 "adrfam": "ipv4", 00:31:22.703 "trsvcid": "4420", 00:31:22.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.703 "hdgst": false, 00:31:22.703 "ddgst": false 00:31:22.703 }, 00:31:22.703 "method": "bdev_nvme_attach_controller" 00:31:22.703 }' 00:31:22.703 [2024-12-06 19:29:33.118285] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:31:22.703 [2024-12-06 19:29:33.118366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268543 ] 00:31:22.703 [2024-12-06 19:29:33.190439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.703 [2024-12-06 19:29:33.247273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.267 Running I/O for 10 seconds... 
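`gen_nvmf_target_json` above expands a here-document once per subsystem, collects the fragments in the `config` array, and lets `jq` splice them into the JSON that bdevperf reads from `--json /dev/fd/62`. The core of that trick, reduced to a single controller and skipping the `jq` merge, with the values from this run filled in:

```shell
#!/usr/bin/env bash
# One expanded config fragment, as the heredoc in nvmf/common.sh produces it.
gen_controller_json() {
    local subsystem=$1 traddr=$2 trsvcid=$3
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_controller_json 1 10.0.0.2 4420
```

Feeding the result through a process substitution (`--json <(gen_controller_json 1 10.0.0.2 4420)`) is what produces the `/dev/fd/62`-style paths seen in the bdevperf command lines.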
00:31:25.134 5737.00 IOPS, 44.82 MiB/s
[2024-12-06T18:29:36.643Z] 5768.50 IOPS, 45.07 MiB/s
[2024-12-06T18:29:38.014Z] 5762.00 IOPS, 45.02 MiB/s
[2024-12-06T18:29:38.947Z] 5771.50 IOPS, 45.09 MiB/s
[2024-12-06T18:29:39.880Z] 5776.40 IOPS, 45.13 MiB/s
[2024-12-06T18:29:40.816Z] 5781.67 IOPS, 45.17 MiB/s
[2024-12-06T18:29:41.751Z] 5783.29 IOPS, 45.18 MiB/s
[2024-12-06T18:29:42.685Z] 5787.00 IOPS, 45.21 MiB/s
[2024-12-06T18:29:43.620Z] 5788.11 IOPS, 45.22 MiB/s
[2024-12-06T18:29:43.878Z] 5792.10 IOPS, 45.25 MiB/s
00:31:33.301 Latency(us)
00:31:33.301 [2024-12-06T18:29:43.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:33.301 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:31:33.301 Verification LBA range: start 0x0 length 0x1000
00:31:33.301 Nvme1n1 : 10.01 5795.99 45.28 0.00 0.00 22024.49 430.84 29127.11
00:31:33.301 [2024-12-06T18:29:43.878Z] ===================================================================================================================
00:31:33.301 [2024-12-06T18:29:43.878Z] Total : 5795.99 45.28 0.00 0.00 22024.49 430.84 29127.11
00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1269729
00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:31:33.301 19:29:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:33.301 { 00:31:33.301 "params": { 00:31:33.301 "name": "Nvme$subsystem", 00:31:33.301 "trtype": "$TEST_TRANSPORT", 00:31:33.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.301 "adrfam": "ipv4", 00:31:33.301 "trsvcid": "$NVMF_PORT", 00:31:33.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.301 "hdgst": ${hdgst:-false}, 00:31:33.301 "ddgst": ${ddgst:-false} 00:31:33.301 }, 00:31:33.301 "method": "bdev_nvme_attach_controller" 00:31:33.301 } 00:31:33.301 EOF 00:31:33.301 )") 00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:33.301 [2024-12-06 19:29:43.835558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.301 [2024-12-06 19:29:43.835610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:33.301 19:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:33.301 "params": { 00:31:33.301 "name": "Nvme1", 00:31:33.301 "trtype": "tcp", 00:31:33.301 "traddr": "10.0.0.2", 00:31:33.301 "adrfam": "ipv4", 00:31:33.301 "trsvcid": "4420", 00:31:33.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:33.301 "hdgst": false, 00:31:33.301 "ddgst": false 00:31:33.301 }, 00:31:33.301 "method": "bdev_nvme_attach_controller" 00:31:33.301 }' 00:31:33.301 [2024-12-06 19:29:43.843472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.301 [2024-12-06 19:29:43.843494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.301 [2024-12-06 19:29:43.851471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.301 [2024-12-06 19:29:43.851491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.301 [2024-12-06 19:29:43.859471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.301 [2024-12-06 19:29:43.859490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.301 [2024-12-06 19:29:43.867471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.301 [2024-12-06 19:29:43.867490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.301 [2024-12-06 19:29:43.871653] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:31:33.301 [2024-12-06 19:29:43.871750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269729 ] 00:31:33.301 [2024-12-06 19:29:43.875479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.302 [2024-12-06 19:29:43.875503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.883468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.883487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.891468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.891487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.899469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.899487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.907470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.907489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.915471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.915490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.923470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.923489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:31:33.560 [2024-12-06 19:29:43.931470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.931488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.937884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.560 [2024-12-06 19:29:43.939470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.939489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.947524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.947559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.955502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.955526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.963470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.963490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.971470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.971490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.979470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.979491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.560 [2024-12-06 19:29:43.987470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.560 [2024-12-06 19:29:43.987491] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:43.995494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:43.995514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:43.996105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:33.560 [2024-12-06 19:29:44.003481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.003503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.011507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.011535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.019510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.019544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.027509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.027545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.035520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.035555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.043515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.043562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.051507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.051542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.059475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.059495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.067499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.067529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.075516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.075551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.083504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.083541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.091471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.091491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.099473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.099493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.107496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.107520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.115478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.115501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.123476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.560 [2024-12-06 19:29:44.123497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.560 [2024-12-06 19:29:44.131487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.561 [2024-12-06 19:29:44.131511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.139493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.139538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.147471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.147491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.155471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.155490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.163470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.163490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.171470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.171489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.179475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.179497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.187476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.187498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.195471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.195491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.203470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.203490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.819 [2024-12-06 19:29:44.211471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.819 [2024-12-06 19:29:44.211490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.219472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.219491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.227474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.227495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.235473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.235494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.243471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.243490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.251471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.251490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.259470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.259489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.267470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.267490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.275475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.275497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.283471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.283490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.291471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.291491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.299471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.299489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.307471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.307489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.315474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.315494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.323844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.323872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.331477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.331500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 Running I/O for 5 seconds...
00:31:33.820 [2024-12-06 19:29:44.346414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.346443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.357369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.357397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.373356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.373397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:33.820 [2024-12-06 19:29:44.384302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.820 [2024-12-06 19:29:44.384329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.078 [2024-12-06 19:29:44.401454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.401481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.418394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.418420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.430751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.430779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.441979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.442006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.457774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.457802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.469882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.469909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.483851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.483877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.494153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.494179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.506813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.506840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.518218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.518258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.529635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.529689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.546074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.546099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.558721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.558749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.569242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.569277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.581512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.581553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.593310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.593350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.607227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.607268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.617634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.617659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.633389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.633431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.079 [2024-12-06 19:29:44.644175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.079 [2024-12-06 19:29:44.644202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.656826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.656854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.673704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.673732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.684363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.684389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.701136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.701162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.711447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.711473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.723622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.723648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.734820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.734856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.746248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.746273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.761299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.761325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.771775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.771806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.784838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.784865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.800430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.800458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.811038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.811065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.823723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.823750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.835048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.835073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.337 [2024-12-06 19:29:44.846611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.337 [2024-12-06 19:29:44.846637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.338 [2024-12-06 19:29:44.860074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.338 [2024-12-06 19:29:44.860101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.338 [2024-12-06 19:29:44.870984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.338 [2024-12-06 19:29:44.871013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.338 [2024-12-06 19:29:44.883871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.338 [2024-12-06 19:29:44.883897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.338 [2024-12-06 19:29:44.895299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.338 [2024-12-06 19:29:44.895340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.338 [2024-12-06 19:29:44.906930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.338 [2024-12-06 19:29:44.906975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:44.919121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:44.919148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:44.930458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:44.930497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:44.944441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:44.944468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:44.954260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:44.954285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:44.969637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:44.969685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:44.983039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:44.983073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:44.993474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:44.993500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.008200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.008225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.017706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.017733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.034523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.034549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.045012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.045051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.057201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.057226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.072852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.072879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.082927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.082967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.095131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.095158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.106619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.106646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.117936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.117965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.131315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.131344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.141842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.141870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.156580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.156607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.596 [2024-12-06 19:29:45.166615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.596 [2024-12-06 19:29:45.166657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.181739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.181766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.196699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.196729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.207357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.207382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.220445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.220494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.232123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.232149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.243619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.243646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.255299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.255340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.267881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.267908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.279207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.279233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.290954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.290980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.302176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.302201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.316954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.316998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.327715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.327755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.340538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.340565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 10873.00 IOPS, 84.95 MiB/s [2024-12-06T18:29:45.431Z] [2024-12-06 19:29:45.351730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.351759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.363299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.363325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.375200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.375225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.387038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.387077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.398542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.398584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.410190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.410216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.854 [2024-12-06 19:29:45.423133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.854 [2024-12-06 19:29:45.423160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.433582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.433608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.446528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.446560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.459979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.460021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.470500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.470539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.486598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.486626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.497987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.498028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.512605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.512632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.522621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.522648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.537503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.537529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.548236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.548262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.560760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.560801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.577931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.577971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.592749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.592777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.603832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.603859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.616135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.616162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.627582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.627610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.638957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.638985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.650595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.650622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.663765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.663793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.673884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.673912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.112 [2024-12-06 19:29:45.686661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.112 [2024-12-06 19:29:45.686699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.698453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.698480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.710094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.710120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.724634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.724661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.734418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.734445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.749624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.749652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.763031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.763059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.773380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.773407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.786368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.786410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.800882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.800911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.810784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.810812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.823540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.823567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.836101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.836144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.847984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.848010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.858794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.858825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.871135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.871161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.882901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.882929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.895041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.895068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.906849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.906876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.918114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.918141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.370 [2024-12-06 19:29:45.929057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.370 [2024-12-06 19:29:45.929083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:35.370 [2024-12-06 19:29:45.946151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.370 [2024-12-06 19:29:45.946179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:45.956760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:45.956787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:45.969252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:45.969278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:45.983692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:45.983734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:45.994193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:45.994235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:46.009260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:46.009303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:46.020100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:46.020126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:46.032696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:46.032731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:46.044230] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:46.044255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:46.055389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:46.055418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:46.066721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:46.066748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:46.078451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:46.078494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:46.092022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:46.092052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.628 [2024-12-06 19:29:46.102067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.628 [2024-12-06 19:29:46.102094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.629 [2024-12-06 19:29:46.117580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.629 [2024-12-06 19:29:46.117607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.629 [2024-12-06 19:29:46.128091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.629 [2024-12-06 19:29:46.128118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.629 [2024-12-06 19:29:46.140488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:35.629 [2024-12-06 19:29:46.140515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.629 [2024-12-06 19:29:46.152137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.629 [2024-12-06 19:29:46.152162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.629 [2024-12-06 19:29:46.163769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.629 [2024-12-06 19:29:46.163797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.629 [2024-12-06 19:29:46.175871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.629 [2024-12-06 19:29:46.175899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.629 [2024-12-06 19:29:46.187282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.629 [2024-12-06 19:29:46.187308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.629 [2024-12-06 19:29:46.199164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.629 [2024-12-06 19:29:46.199190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.210390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.210417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.221996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.222038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.236784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 
[2024-12-06 19:29:46.236827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.246617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.246659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.261247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.261274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.271020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.271048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.282864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.282894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.293824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.293851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.307217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.307245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.317055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.317083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.329071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.329097] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 10926.00 IOPS, 85.36 MiB/s [2024-12-06T18:29:46.465Z] [2024-12-06 19:29:46.346362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.346390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.361845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.361873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.377047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.377084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.387273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.387299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.399706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.399762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.411121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.411147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.423135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.423160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.434511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.434538] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.445537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.445563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.888 [2024-12-06 19:29:46.459421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.888 [2024-12-06 19:29:46.459449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.470325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.470353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.485901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.485929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.496721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.496763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.509509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.509535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.526185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.526212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.541482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.541524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:36.147 [2024-12-06 19:29:46.557460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.557488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.574363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.574406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.586784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.586811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.596843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.596871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.613464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.613490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.630486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.630521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.640441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.640467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.652490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.652532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.663772] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.663799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.675229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.675256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.686286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.686311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.700241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.700270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.710827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.710870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.147 [2024-12-06 19:29:46.723305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.147 [2024-12-06 19:29:46.723332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.734698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.734741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.747242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.747268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.759270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.759296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.770159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.770201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.785747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.785775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.800987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.801015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.812034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.812060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.824609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.824648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.836158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.836183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.848233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.848258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.859472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 
[2024-12-06 19:29:46.859506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.870651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.870698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.882364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.882390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.894052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.894077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.908209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.908236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.918743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.918779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.931159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.931184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.942315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.942340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.956872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.956901] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.967160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.967188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.406 [2024-12-06 19:29:46.979621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.406 [2024-12-06 19:29:46.979646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:46.990783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:46.990810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.003161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.003187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.014629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.014676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.025636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.025686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.040183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.040209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.050774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.050800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:36.665 [2024-12-06 19:29:47.063219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.063244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.075222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.075249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.087296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.087328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.097994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.098034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.111797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.111826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.121633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.121681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.136105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.136130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.146099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.146124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.161386] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.161413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.177328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.177356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.187723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.187754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.200936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.200978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.217159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.217185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.227673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.227699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.665 [2024-12-06 19:29:47.240184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.665 [2024-12-06 19:29:47.240225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.923 [2024-12-06 19:29:47.251145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.923 [2024-12-06 19:29:47.251170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.263490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.263516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.275618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.275658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.286746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.286772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.298110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.298148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.309882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.309908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.322018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.322043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.336766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.336794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.346955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 10933.33 IOPS, 85.42 MiB/s [2024-12-06T18:29:47.501Z] [2024-12-06 19:29:47.346983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.359323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.359349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.370752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.370779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.382033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.382058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.397575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.397602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.414450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.414477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.425355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.425381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.437988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.438014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.452770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.452798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.462983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 
[2024-12-06 19:29:47.463029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.475570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.475611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.486641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.486692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.924 [2024-12-06 19:29:47.497568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.924 [2024-12-06 19:29:47.497595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.511952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.511995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.521980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.522008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.534982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.535009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.546493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.546520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.558179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.558204] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.569983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.570024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.583925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.583969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.594511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.594537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.609415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.609440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.620442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.620468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.633438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.633464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.650364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.650391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.664187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.664214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:37.186 [2024-12-06 19:29:47.674601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.674627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.687474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.687500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.698787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.698815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.710353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.710378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.722301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.722342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.738636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.738685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.186 [2024-12-06 19:29:47.749047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.186 [2024-12-06 19:29:47.749072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.485 [2024-12-06 19:29:47.763937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.485 [2024-12-06 19:29:47.763965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.485 [2024-12-06 19:29:47.773531] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.485 [2024-12-06 19:29:47.773558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.485 [2024-12-06 19:29:47.786276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.485 [2024-12-06 19:29:47.786311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.485 [2024-12-06 19:29:47.799693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.485 [2024-12-06 19:29:47.799720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.485 [2024-12-06 19:29:47.810233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.485 [2024-12-06 19:29:47.810258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.485 [2024-12-06 19:29:47.822844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.485 [2024-12-06 19:29:47.822871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.485 [2024-12-06 19:29:47.833753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.485 [2024-12-06 19:29:47.833780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.485 [2024-12-06 19:29:47.847284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.485 [2024-12-06 19:29:47.847325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.857096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.857121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.873574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.873600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.888869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.888897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.899148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.899188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.911645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.911694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.923273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.923298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.935399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.935425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.947088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.947112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.958780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.958820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.970699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 
[2024-12-06 19:29:47.970741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.982347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.982373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:47.994431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:47.994457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:48.010226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:48.010251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:48.020571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:48.020606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.486 [2024-12-06 19:29:48.033014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.486 [2024-12-06 19:29:48.033040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.049358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.049385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.060011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.060036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.072541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.072566] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.089526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.089552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.105606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.105633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.121227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.121254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.138105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.138132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.148431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.148471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.161254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.161279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.176841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.176868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.186684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.186713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:37.767 [2024-12-06 19:29:48.202510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.202537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.213586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.213611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.228793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.228820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.239018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.239045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.251497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.251522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.263398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.263424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.275743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.275777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.287726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.767 [2024-12-06 19:29:48.287751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.767 [2024-12-06 19:29:48.299307] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.768 [2024-12-06 19:29:48.299332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.768 [2024-12-06 19:29:48.311251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.768 [2024-12-06 19:29:48.311276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.768 [2024-12-06 19:29:48.323015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.768 [2024-12-06 19:29:48.323040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.768 [2024-12-06 19:29:48.334660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.768 [2024-12-06 19:29:48.334695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.346097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.346124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 10920.25 IOPS, 85.31 MiB/s [2024-12-06T18:29:48.603Z] [2024-12-06 19:29:48.357278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.357303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.371804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.371832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.382073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.382099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.398035] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.398060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.411139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.411165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.421970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.421996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.436565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.436604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.447137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.447178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.460290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.460316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.471826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.471852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.482457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.482482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.496616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.496642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.507255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.507296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.520467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.520507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.531723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.531763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.543749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.543777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.555764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.555791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.567612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.567640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.579657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 [2024-12-06 19:29:48.579697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.026 [2024-12-06 19:29:48.592392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.026 
[2024-12-06 19:29:48.592417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.284 [2024-12-06 19:29:48.604465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.284 [2024-12-06 19:29:48.604507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.284 [2024-12-06 19:29:48.618723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.284 [2024-12-06 19:29:48.618752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.284 [2024-12-06 19:29:48.629417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.284 [2024-12-06 19:29:48.629443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.284 [2024-12-06 19:29:48.641972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.284 [2024-12-06 19:29:48.642006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.284 [2024-12-06 19:29:48.657789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.284 [2024-12-06 19:29:48.657833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.284 [2024-12-06 19:29:48.668464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.284 [2024-12-06 19:29:48.668504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.284 [2024-12-06 19:29:48.684460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.284 [2024-12-06 19:29:48.684485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.284 [2024-12-06 19:29:48.694618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.284 [2024-12-06 19:29:48.694643] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.284 [2024-12-06 19:29:48.709600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.709625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.722957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.722984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.733833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.733860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.749320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.749345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.759592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.759632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.772884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.772925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.789978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.790006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.805183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.805211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:38.285 [2024-12-06 19:29:48.815689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.815726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.828705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.828732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.846174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.846199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.285 [2024-12-06 19:29:48.856787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.285 [2024-12-06 19:29:48.856815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.869715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.542 [2024-12-06 19:29:48.869743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.883594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.542 [2024-12-06 19:29:48.883621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.893754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.542 [2024-12-06 19:29:48.893796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.910188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.542 [2024-12-06 19:29:48.910215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.920655] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.542 [2024-12-06 19:29:48.920703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.938620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.542 [2024-12-06 19:29:48.938661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.949239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.542 [2024-12-06 19:29:48.949264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.963631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.542 [2024-12-06 19:29:48.963679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.974000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.542 [2024-12-06 19:29:48.974026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.542 [2024-12-06 19:29:48.988219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.543 [2024-12-06 19:29:48.988247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.543 [2024-12-06 19:29:48.998780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.543 [2024-12-06 19:29:48.998818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.543 [2024-12-06 19:29:49.011505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.543 [2024-12-06 19:29:49.011531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.543 [2024-12-06 19:29:49.023214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:38.543 [2024-12-06 19:29:49.023240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.543 [previous two messages repeated with successive timestamps from 19:29:49.035481 through 19:29:49.344859] 00:31:38.801 10901.00 IOPS, 85.16 MiB/s [2024-12-06T18:29:49.378Z] [2024-12-06 19:29:49.359367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.801 [2024-12-06 19:29:49.359397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.801 00:31:38.801 Latency(us) 00:31:38.801 [2024-12-06T18:29:49.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.801 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:38.801 Nvme1n1 : 5.01 10902.85 85.18 0.00 0.00 11722.61 3021.94 18932.62 00:31:38.801 [2024-12-06T18:29:49.378Z] =================================================================================================================== 00:31:38.801 [2024-12-06T18:29:49.378Z] Total : 10902.85 85.18 0.00 0.00 11722.61 3021.94 18932.62 00:31:38.801 [2024-12-06 19:29:49.363479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.801 [2024-12-06 19:29:49.363501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.801 [previous two messages repeated with successive timestamps from 19:29:49.371477 through 19:29:49.583495] 00:31:39.059
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1269729) - No such process 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1269729 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.059 delay0 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.059 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:39.060 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.060 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.060 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.060 19:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:39.316 [2024-12-06 19:29:49.703600] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:47.430 Initializing NVMe Controllers 00:31:47.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:47.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:47.430 Initialization complete. Launching workers. 00:31:47.430 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 221, failed: 26388 00:31:47.430 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26467, failed to submit 142 00:31:47.430 success 26388, unsuccessful 79, failed 0 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.430 rmmod nvme_tcp 00:31:47.430 rmmod nvme_fabrics 00:31:47.430 rmmod nvme_keyring 00:31:47.430 19:29:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1268516 ']' 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1268516 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1268516 ']' 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1268516 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1268516 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1268516' 00:31:47.430 killing process with pid 1268516 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1268516 00:31:47.430 19:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1268516 00:31:47.430 19:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.430 19:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.808 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:48.808 00:31:48.808 real 0m29.106s 00:31:48.808 user 0m41.469s 00:31:48.808 sys 0m10.111s 00:31:48.808 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:48.808 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:48.808 ************************************ 
00:31:48.808 END TEST nvmf_zcopy 00:31:48.808 ************************************ 00:31:48.808 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:48.808 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:48.808 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.808 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:48.808 ************************************ 00:31:48.808 START TEST nvmf_nmic 00:31:48.808 ************************************ 00:31:48.808 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:48.808 * Looking for test storage... 
00:31:49.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.067 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:49.067 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:49.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.068 --rc genhtml_branch_coverage=1 00:31:49.068 --rc genhtml_function_coverage=1 00:31:49.068 --rc genhtml_legend=1 00:31:49.068 --rc geninfo_all_blocks=1 00:31:49.068 --rc geninfo_unexecuted_blocks=1 00:31:49.068 00:31:49.068 ' 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:49.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.068 --rc genhtml_branch_coverage=1 00:31:49.068 --rc genhtml_function_coverage=1 00:31:49.068 --rc genhtml_legend=1 00:31:49.068 --rc geninfo_all_blocks=1 00:31:49.068 --rc geninfo_unexecuted_blocks=1 00:31:49.068 00:31:49.068 ' 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:49.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.068 --rc genhtml_branch_coverage=1 00:31:49.068 --rc genhtml_function_coverage=1 00:31:49.068 --rc genhtml_legend=1 00:31:49.068 --rc geninfo_all_blocks=1 00:31:49.068 --rc geninfo_unexecuted_blocks=1 00:31:49.068 00:31:49.068 ' 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:49.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.068 --rc genhtml_branch_coverage=1 00:31:49.068 --rc genhtml_function_coverage=1 00:31:49.068 --rc genhtml_legend=1 00:31:49.068 --rc geninfo_all_blocks=1 00:31:49.068 --rc geninfo_unexecuted_blocks=1 00:31:49.068 00:31:49.068 ' 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.068 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.069 19:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.972 19:30:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:50.972 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:50.972 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.972 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:50.973 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:50.973 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:50.973 19:30:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.973 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:31:51.231 00:31:51.231 --- 10.0.0.2 ping statistics --- 00:31:51.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.231 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:31:51.231 00:31:51.231 --- 10.0.0.1 ping statistics --- 00:31:51.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.231 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1273333 
00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1273333 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1273333 ']' 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:51.231 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.231 [2024-12-06 19:30:01.765092] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:51.231 [2024-12-06 19:30:01.766185] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:31:51.232 [2024-12-06 19:30:01.766260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.491 [2024-12-06 19:30:01.842292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:51.491 [2024-12-06 19:30:01.903270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.491 [2024-12-06 19:30:01.903329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.491 [2024-12-06 19:30:01.903354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.491 [2024-12-06 19:30:01.903365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.491 [2024-12-06 19:30:01.903375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.491 [2024-12-06 19:30:01.904871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.491 [2024-12-06 19:30:01.905005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:51.491 [2024-12-06 19:30:01.905056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:51.491 [2024-12-06 19:30:01.905060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.491 [2024-12-06 19:30:01.997153] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:51.491 [2024-12-06 19:30:01.997371] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:51.491 [2024-12-06 19:30:01.997687] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:51.491 [2024-12-06 19:30:01.998329] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:51.491 [2024-12-06 19:30:01.998521] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:51.491 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:51.491 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:51.491 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:51.491 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:51.491 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.491 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.491 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:51.491 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.491 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.491 [2024-12-06 19:30:02.053735] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.750 Malloc0 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.750 [2024-12-06 19:30:02.133916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:51.750 test case1: single bdev can't be used in multiple subsystems 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.750 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.751 [2024-12-06 19:30:02.157655] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:31:51.751 [2024-12-06 19:30:02.157697] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:51.751 [2024-12-06 19:30:02.157728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:51.751 request: 00:31:51.751 { 00:31:51.751 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:51.751 "namespace": { 00:31:51.751 "bdev_name": "Malloc0", 00:31:51.751 "no_auto_visible": false, 00:31:51.751 "hide_metadata": false 00:31:51.751 }, 00:31:51.751 "method": "nvmf_subsystem_add_ns", 00:31:51.751 "req_id": 1 00:31:51.751 } 00:31:51.751 Got JSON-RPC error response 00:31:51.751 response: 00:31:51.751 { 00:31:51.751 "code": -32602, 00:31:51.751 "message": "Invalid parameters" 00:31:51.751 } 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:51.751 Adding namespace failed - expected result. 
00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:51.751 test case2: host connect to nvmf target in multiple paths 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.751 [2024-12-06 19:30:02.165778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.751 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:52.009 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:52.009 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:52.009 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:52.009 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:52.009 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:52.009 19:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:54.539 19:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:54.539 19:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:54.539 19:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:54.539 19:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:54.539 19:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:54.539 19:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:54.539 19:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:54.539 [global] 00:31:54.539 thread=1 00:31:54.539 invalidate=1 00:31:54.539 rw=write 00:31:54.539 time_based=1 00:31:54.539 runtime=1 00:31:54.539 ioengine=libaio 00:31:54.539 direct=1 00:31:54.539 bs=4096 00:31:54.539 iodepth=1 00:31:54.539 norandommap=0 00:31:54.539 numjobs=1 00:31:54.539 00:31:54.539 verify_dump=1 00:31:54.539 verify_backlog=512 00:31:54.539 verify_state_save=0 00:31:54.539 do_verify=1 00:31:54.539 verify=crc32c-intel 00:31:54.539 [job0] 00:31:54.539 filename=/dev/nvme0n1 00:31:54.539 Could not set queue depth (nvme0n1) 00:31:54.539 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:54.539 fio-3.35 00:31:54.539 Starting 1 thread 00:31:55.470 00:31:55.470 job0: (groupid=0, jobs=1): err= 0: pid=1273842: Fri Dec 6 
19:30:05 2024 00:31:55.470 read: IOPS=22, BW=89.2KiB/s (91.4kB/s)(92.0KiB/1031msec) 00:31:55.470 slat (nsec): min=6575, max=33726, avg=17944.39, stdev=7837.39 00:31:55.470 clat (usec): min=40413, max=41029, avg=40951.58, stdev=120.69 00:31:55.470 lat (usec): min=40419, max=41047, avg=40969.52, stdev=122.05 00:31:55.470 clat percentiles (usec): 00:31:55.470 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:55.470 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:55.470 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:55.470 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:55.470 | 99.99th=[41157] 00:31:55.470 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:31:55.470 slat (nsec): min=5695, max=28750, avg=7002.56, stdev=2216.01 00:31:55.470 clat (usec): min=137, max=308, avg=163.58, stdev=33.44 00:31:55.470 lat (usec): min=143, max=327, avg=170.58, stdev=33.65 00:31:55.470 clat percentiles (usec): 00:31:55.470 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 143], 20.00th=[ 147], 00:31:55.470 | 30.00th=[ 149], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:31:55.470 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 245], 95.00th=[ 247], 00:31:55.470 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 310], 99.95th=[ 310], 00:31:55.470 | 99.99th=[ 310] 00:31:55.470 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:55.470 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:55.470 lat (usec) : 250=93.27%, 500=2.43% 00:31:55.470 lat (msec) : 50=4.30% 00:31:55.470 cpu : usr=0.10%, sys=0.29%, ctx=535, majf=0, minf=1 00:31:55.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.470 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:55.470 00:31:55.470 Run status group 0 (all jobs): 00:31:55.470 READ: bw=89.2KiB/s (91.4kB/s), 89.2KiB/s-89.2KiB/s (91.4kB/s-91.4kB/s), io=92.0KiB (94.2kB), run=1031-1031msec 00:31:55.470 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:31:55.470 00:31:55.470 Disk stats (read/write): 00:31:55.470 nvme0n1: ios=69/512, merge=0/0, ticks=806/82, in_queue=888, util=91.48% 00:31:55.470 19:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:55.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:55.727 19:30:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.727 rmmod nvme_tcp 00:31:55.727 rmmod nvme_fabrics 00:31:55.727 rmmod nvme_keyring 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1273333 ']' 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1273333 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1273333 ']' 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1273333 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1273333 
00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1273333' 00:31:55.727 killing process with pid 1273333 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1273333 00:31:55.727 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1273333 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.984 19:30:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.984 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.515 00:31:58.515 real 0m9.177s 00:31:58.515 user 0m17.105s 00:31:58.515 sys 0m3.222s 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:58.515 ************************************ 00:31:58.515 END TEST nvmf_nmic 00:31:58.515 ************************************ 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:58.515 ************************************ 00:31:58.515 START TEST nvmf_fio_target 00:31:58.515 ************************************ 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:58.515 * Looking for test storage... 
00:31:58.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.515 
19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:58.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.515 --rc genhtml_branch_coverage=1 00:31:58.515 --rc genhtml_function_coverage=1 00:31:58.515 --rc genhtml_legend=1 00:31:58.515 --rc geninfo_all_blocks=1 00:31:58.515 --rc geninfo_unexecuted_blocks=1 00:31:58.515 00:31:58.515 ' 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:58.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.515 --rc genhtml_branch_coverage=1 00:31:58.515 --rc genhtml_function_coverage=1 00:31:58.515 --rc genhtml_legend=1 00:31:58.515 --rc geninfo_all_blocks=1 00:31:58.515 --rc geninfo_unexecuted_blocks=1 00:31:58.515 00:31:58.515 ' 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:58.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.515 --rc genhtml_branch_coverage=1 00:31:58.515 --rc genhtml_function_coverage=1 00:31:58.515 --rc genhtml_legend=1 00:31:58.515 --rc geninfo_all_blocks=1 00:31:58.515 --rc geninfo_unexecuted_blocks=1 00:31:58.515 00:31:58.515 ' 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:58.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.515 --rc genhtml_branch_coverage=1 00:31:58.515 --rc genhtml_function_coverage=1 00:31:58.515 --rc genhtml_legend=1 00:31:58.515 --rc geninfo_all_blocks=1 
00:31:58.515 --rc geninfo_unexecuted_blocks=1 00:31:58.515 00:31:58.515 ' 00:31:58.515 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:58.516 
19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.516 19:30:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.516 
19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:58.516 19:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:58.516 19:30:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.415 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.416 19:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:00.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:00.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.416 
19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:00.416 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:00.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.416 19:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.416 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:32:00.417 00:32:00.417 --- 10.0.0.2 ping statistics --- 00:32:00.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.417 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:32:00.417 00:32:00.417 --- 10.0.0.1 ping statistics --- 00:32:00.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.417 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.417 19:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1276426 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1276426 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1276426 ']' 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.417 19:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.674 [2024-12-06 19:30:11.018451] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.674 [2024-12-06 19:30:11.019512] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:32:00.674 [2024-12-06 19:30:11.019560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.674 [2024-12-06 19:30:11.087133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.674 [2024-12-06 19:30:11.142676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.674 [2024-12-06 19:30:11.142734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.674 [2024-12-06 19:30:11.142757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.674 [2024-12-06 19:30:11.142768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.674 [2024-12-06 19:30:11.142777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.674 [2024-12-06 19:30:11.144309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.674 [2024-12-06 19:30:11.144369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.674 [2024-12-06 19:30:11.144433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.674 [2024-12-06 19:30:11.144436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.674 [2024-12-06 19:30:11.229329] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.674 [2024-12-06 19:30:11.229521] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:00.674 [2024-12-06 19:30:11.229800] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:00.675 [2024-12-06 19:30:11.230362] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.675 [2024-12-06 19:30:11.230560] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:00.938 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.938 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:00.938 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.938 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.938 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.938 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.938 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:01.195 [2024-12-06 19:30:11.545165] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.195 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.452 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:01.452 19:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:32:01.710 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:01.710 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.968 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:01.968 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:02.226 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:02.226 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:02.484 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:03.051 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:03.051 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:03.051 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:03.051 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:03.620 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:03.620 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:03.620 19:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:03.879 19:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:03.879 19:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:04.446 19:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:04.446 19:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:04.446 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:04.704 [2024-12-06 19:30:15.261303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.704 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:05.268 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:05.268 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:05.526 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:05.526 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:05.526 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:05.526 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:05.526 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:05.526 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:08.065 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:08.065 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:08.065 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:08.065 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:08.065 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:08.065 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:08.065 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:08.065 [global] 00:32:08.065 thread=1 00:32:08.065 invalidate=1 00:32:08.065 rw=write 00:32:08.065 time_based=1 00:32:08.065 runtime=1 00:32:08.065 ioengine=libaio 00:32:08.065 direct=1 00:32:08.065 bs=4096 00:32:08.065 iodepth=1 00:32:08.065 norandommap=0 00:32:08.065 numjobs=1 00:32:08.065 00:32:08.065 verify_dump=1 00:32:08.066 verify_backlog=512 00:32:08.066 verify_state_save=0 00:32:08.066 do_verify=1 00:32:08.066 verify=crc32c-intel 00:32:08.066 [job0] 00:32:08.066 filename=/dev/nvme0n1 00:32:08.066 [job1] 00:32:08.066 filename=/dev/nvme0n2 00:32:08.066 [job2] 00:32:08.066 filename=/dev/nvme0n3 00:32:08.066 [job3] 00:32:08.066 filename=/dev/nvme0n4 00:32:08.066 Could not set queue depth (nvme0n1) 00:32:08.066 Could not set queue depth (nvme0n2) 00:32:08.066 Could not set queue depth (nvme0n3) 00:32:08.066 Could not set queue depth (nvme0n4) 00:32:08.066 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:08.066 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:08.066 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:08.066 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:08.066 fio-3.35 00:32:08.066 Starting 4 threads 00:32:08.999 00:32:08.999 job0: (groupid=0, jobs=1): err= 0: pid=1277474: Fri Dec 6 19:30:19 2024 00:32:08.999 read: IOPS=1626, BW=6505KiB/s (6662kB/s)(6512KiB/1001msec) 00:32:08.999 slat (nsec): min=4201, max=36883, avg=7731.15, stdev=4700.38 00:32:08.999 clat (usec): min=193, max=40993, avg=355.89, stdev=1618.78 00:32:08.999 lat (usec): min=209, 
max=41008, avg=363.62, stdev=1619.08 00:32:08.999 clat percentiles (usec): 00:32:08.999 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 231], 00:32:08.999 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:32:08.999 | 70.00th=[ 273], 80.00th=[ 379], 90.00th=[ 424], 95.00th=[ 474], 00:32:08.999 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:32:08.999 | 99.99th=[41157] 00:32:08.999 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:08.999 slat (nsec): min=5830, max=61299, avg=9235.03, stdev=5409.64 00:32:08.999 clat (usec): min=134, max=396, avg=185.95, stdev=42.60 00:32:08.999 lat (usec): min=141, max=415, avg=195.18, stdev=45.44 00:32:08.999 clat percentiles (usec): 00:32:08.999 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:32:08.999 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:32:08.999 | 70.00th=[ 182], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 265], 00:32:08.999 | 99.00th=[ 306], 99.50th=[ 363], 99.90th=[ 383], 99.95th=[ 388], 00:32:08.999 | 99.99th=[ 396] 00:32:08.999 bw ( KiB/s): min= 8192, max= 8192, per=31.37%, avg=8192.00, stdev= 0.00, samples=1 00:32:08.999 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:08.999 lat (usec) : 250=72.12%, 500=26.47%, 750=1.33% 00:32:08.999 lat (msec) : 50=0.08% 00:32:08.999 cpu : usr=1.70%, sys=3.00%, ctx=3677, majf=0, minf=1 00:32:08.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.999 issued rwts: total=1628,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.999 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.999 job1: (groupid=0, jobs=1): err= 0: pid=1277485: Fri Dec 6 19:30:19 2024 00:32:08.999 read: IOPS=23, BW=92.9KiB/s (95.2kB/s)(96.0KiB/1033msec) 
00:32:08.999 slat (nsec): min=8457, max=28066, avg=14405.25, stdev=3380.69 00:32:08.999 clat (usec): min=342, max=42010, avg=37674.28, stdev=11499.83 00:32:08.999 lat (usec): min=356, max=42024, avg=37688.69, stdev=11499.23 00:32:08.999 clat percentiles (usec): 00:32:08.999 | 1.00th=[ 343], 5.00th=[ 359], 10.00th=[40633], 20.00th=[41157], 00:32:08.999 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:08.999 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:32:08.999 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:08.999 | 99.99th=[42206] 00:32:08.999 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:32:08.999 slat (nsec): min=7618, max=54943, avg=16709.29, stdev=7297.47 00:32:08.999 clat (usec): min=179, max=468, avg=229.59, stdev=25.53 00:32:08.999 lat (usec): min=188, max=492, avg=246.30, stdev=24.21 00:32:08.999 clat percentiles (usec): 00:32:08.999 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:32:08.999 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:32:08.999 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 262], 00:32:08.999 | 99.00th=[ 306], 99.50th=[ 392], 99.90th=[ 469], 99.95th=[ 469], 00:32:08.999 | 99.99th=[ 469] 00:32:08.999 bw ( KiB/s): min= 4096, max= 4096, per=15.68%, avg=4096.00, stdev= 0.00, samples=1 00:32:08.999 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:08.999 lat (usec) : 250=80.22%, 500=15.67% 00:32:08.999 lat (msec) : 50=4.10% 00:32:08.999 cpu : usr=0.29%, sys=1.36%, ctx=536, majf=0, minf=2 00:32:08.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.999 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.999 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:32:08.999 job2: (groupid=0, jobs=1): err= 0: pid=1277486: Fri Dec 6 19:30:19 2024 00:32:08.999 read: IOPS=1607, BW=6430KiB/s (6584kB/s)(6436KiB/1001msec) 00:32:08.999 slat (nsec): min=5945, max=31822, avg=7723.53, stdev=3093.04 00:32:08.999 clat (usec): min=221, max=41030, avg=327.45, stdev=1017.58 00:32:08.999 lat (usec): min=228, max=41037, avg=335.17, stdev=1017.60 00:32:08.999 clat percentiles (usec): 00:32:08.999 | 1.00th=[ 239], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 262], 00:32:08.999 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:32:08.999 | 70.00th=[ 297], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 445], 00:32:08.999 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[ 783], 99.95th=[41157], 00:32:08.999 | 99.99th=[41157] 00:32:08.999 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:08.999 slat (nsec): min=6945, max=54347, avg=11862.43, stdev=6357.94 00:32:08.999 clat (usec): min=147, max=536, avg=208.55, stdev=30.26 00:32:08.999 lat (usec): min=159, max=545, avg=220.42, stdev=33.67 00:32:08.999 clat percentiles (usec): 00:32:08.999 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:32:08.999 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:32:08.999 | 70.00th=[ 212], 80.00th=[ 231], 90.00th=[ 251], 95.00th=[ 265], 00:32:08.999 | 99.00th=[ 297], 99.50th=[ 334], 99.90th=[ 420], 99.95th=[ 441], 00:32:08.999 | 99.99th=[ 537] 00:32:08.999 bw ( KiB/s): min= 8192, max= 8192, per=31.37%, avg=8192.00, stdev= 0.00, samples=1 00:32:08.999 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:08.999 lat (usec) : 250=52.01%, 500=46.57%, 750=1.37%, 1000=0.03% 00:32:08.999 lat (msec) : 50=0.03% 00:32:08.999 cpu : usr=3.60%, sys=3.90%, ctx=3658, majf=0, minf=1 00:32:08.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.999 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.999 issued rwts: total=1609,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.999 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.999 job3: (groupid=0, jobs=1): err= 0: pid=1277487: Fri Dec 6 19:30:19 2024 00:32:08.999 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:08.999 slat (nsec): min=4449, max=35253, avg=7793.80, stdev=3777.70 00:32:08.999 clat (usec): min=213, max=653, avg=267.18, stdev=43.78 00:32:08.999 lat (usec): min=219, max=685, avg=274.97, stdev=44.57 00:32:08.999 clat percentiles (usec): 00:32:08.999 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:32:08.999 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:32:08.999 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 375], 00:32:08.999 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 553], 99.95th=[ 553], 00:32:08.999 | 99.99th=[ 652] 00:32:08.999 write: IOPS=2134, BW=8539KiB/s (8744kB/s)(8548KiB/1001msec); 0 zone resets 00:32:08.999 slat (nsec): min=6258, max=62843, avg=9367.76, stdev=5024.59 00:32:08.999 clat (usec): min=150, max=384, avg=190.52, stdev=27.54 00:32:08.999 lat (usec): min=157, max=397, avg=199.89, stdev=30.15 00:32:08.999 clat percentiles (usec): 00:32:08.999 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:32:08.999 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:32:08.999 | 70.00th=[ 192], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 245], 00:32:08.999 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 379], 99.95th=[ 379], 00:32:08.999 | 99.99th=[ 383] 00:32:08.999 bw ( KiB/s): min= 8192, max= 8192, per=31.37%, avg=8192.00, stdev= 0.00, samples=1 00:32:08.999 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:08.999 lat (usec) : 250=67.17%, 500=32.52%, 750=0.31% 00:32:08.999 cpu : usr=2.30%, sys=3.20%, ctx=4189, majf=0, minf=1 00:32:08.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.999 issued rwts: total=2048,2137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.999 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.999 00:32:08.999 Run status group 0 (all jobs): 00:32:08.999 READ: bw=20.1MiB/s (21.1MB/s), 92.9KiB/s-8184KiB/s (95.2kB/s-8380kB/s), io=20.7MiB (21.7MB), run=1001-1033msec 00:32:08.999 WRITE: bw=25.5MiB/s (26.7MB/s), 1983KiB/s-8539KiB/s (2030kB/s-8744kB/s), io=26.3MiB (27.6MB), run=1001-1033msec 00:32:08.999 00:32:08.999 Disk stats (read/write): 00:32:08.999 nvme0n1: ios=1561/1785, merge=0/0, ticks=1325/320, in_queue=1645, util=85.47% 00:32:08.999 nvme0n2: ios=69/512, merge=0/0, ticks=762/112, in_queue=874, util=90.64% 00:32:08.999 nvme0n3: ios=1469/1536, merge=0/0, ticks=1379/308, in_queue=1687, util=93.73% 00:32:08.999 nvme0n4: ios=1609/2048, merge=0/0, ticks=647/370, in_queue=1017, util=94.42% 00:32:08.999 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:08.999 [global] 00:32:08.999 thread=1 00:32:08.999 invalidate=1 00:32:08.999 rw=randwrite 00:32:08.999 time_based=1 00:32:08.999 runtime=1 00:32:08.999 ioengine=libaio 00:32:08.999 direct=1 00:32:08.999 bs=4096 00:32:08.999 iodepth=1 00:32:08.999 norandommap=0 00:32:08.999 numjobs=1 00:32:08.999 00:32:08.999 verify_dump=1 00:32:08.999 verify_backlog=512 00:32:08.999 verify_state_save=0 00:32:08.999 do_verify=1 00:32:08.999 verify=crc32c-intel 00:32:08.999 [job0] 00:32:08.999 filename=/dev/nvme0n1 00:32:08.999 [job1] 00:32:08.999 filename=/dev/nvme0n2 00:32:08.999 [job2] 00:32:08.999 filename=/dev/nvme0n3 00:32:08.999 [job3] 00:32:08.999 filename=/dev/nvme0n4 00:32:09.257 Could not set queue depth 
(nvme0n1) 00:32:09.257 Could not set queue depth (nvme0n2) 00:32:09.257 Could not set queue depth (nvme0n3) 00:32:09.257 Could not set queue depth (nvme0n4) 00:32:09.257 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.257 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.257 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.257 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.257 fio-3.35 00:32:09.257 Starting 4 threads 00:32:10.666 00:32:10.666 job0: (groupid=0, jobs=1): err= 0: pid=1277719: Fri Dec 6 19:30:20 2024 00:32:10.666 read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec) 00:32:10.666 slat (nsec): min=14545, max=33880, avg=21751.86, stdev=8263.55 00:32:10.666 clat (usec): min=282, max=43986, avg=40019.81, stdev=8894.27 00:32:10.666 lat (usec): min=304, max=44006, avg=40041.56, stdev=8894.35 00:32:10.666 clat percentiles (usec): 00:32:10.666 | 1.00th=[ 281], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:32:10.666 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:10.666 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:10.666 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:32:10.666 | 99.99th=[43779] 00:32:10.666 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:32:10.666 slat (nsec): min=6647, max=71304, avg=19876.05, stdev=8708.80 00:32:10.666 clat (usec): min=189, max=1821, avg=285.20, stdev=105.48 00:32:10.666 lat (usec): min=206, max=1836, avg=305.08, stdev=107.06 00:32:10.666 clat percentiles (usec): 00:32:10.666 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 231], 00:32:10.666 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 265], 00:32:10.666 | 
70.00th=[ 293], 80.00th=[ 351], 90.00th=[ 392], 95.00th=[ 429], 00:32:10.666 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 1827], 99.95th=[ 1827], 00:32:10.666 | 99.99th=[ 1827] 00:32:10.666 bw ( KiB/s): min= 4096, max= 4096, per=26.03%, avg=4096.00, stdev= 0.00, samples=1 00:32:10.666 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:10.666 lat (usec) : 250=44.76%, 500=50.19%, 750=0.75% 00:32:10.666 lat (msec) : 2=0.37%, 50=3.93% 00:32:10.666 cpu : usr=0.48%, sys=1.44%, ctx=535, majf=0, minf=1 00:32:10.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.666 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.666 job1: (groupid=0, jobs=1): err= 0: pid=1277720: Fri Dec 6 19:30:20 2024 00:32:10.666 read: IOPS=1004, BW=4020KiB/s (4116kB/s)(4044KiB/1006msec) 00:32:10.666 slat (nsec): min=6934, max=70516, avg=16452.02, stdev=4737.87 00:32:10.666 clat (usec): min=217, max=41129, avg=732.86, stdev=4221.90 00:32:10.666 lat (usec): min=225, max=41160, avg=749.31, stdev=4221.85 00:32:10.666 clat percentiles (usec): 00:32:10.666 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 262], 20.00th=[ 273], 00:32:10.666 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:32:10.666 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 343], 00:32:10.666 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:10.666 | 99.99th=[41157] 00:32:10.666 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:32:10.666 slat (nsec): min=7959, max=51111, avg=18530.72, stdev=5823.98 00:32:10.666 clat (usec): min=152, max=2043, avg=212.37, stdev=85.35 00:32:10.666 lat (usec): min=164, max=2063, avg=230.90, stdev=85.73 00:32:10.666 
clat percentiles (usec): 00:32:10.666 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 194], 00:32:10.666 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:32:10.666 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 260], 00:32:10.666 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 2024], 99.95th=[ 2040], 00:32:10.666 | 99.99th=[ 2040] 00:32:10.666 bw ( KiB/s): min= 8192, max= 8192, per=52.05%, avg=8192.00, stdev= 0.00, samples=1 00:32:10.666 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:10.666 lat (usec) : 250=51.45%, 500=47.03%, 750=0.84% 00:32:10.667 lat (msec) : 2=0.05%, 4=0.10%, 50=0.54% 00:32:10.667 cpu : usr=2.29%, sys=5.37%, ctx=2036, majf=0, minf=1 00:32:10.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.667 issued rwts: total=1011,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.667 job2: (groupid=0, jobs=1): err= 0: pid=1277721: Fri Dec 6 19:30:20 2024 00:32:10.667 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:32:10.667 slat (nsec): min=13325, max=42698, avg=23944.04, stdev=9713.51 00:32:10.667 clat (usec): min=533, max=42010, avg=38014.72, stdev=11829.81 00:32:10.667 lat (usec): min=566, max=42028, avg=38038.66, stdev=11830.16 00:32:10.667 clat percentiles (usec): 00:32:10.667 | 1.00th=[ 537], 5.00th=[ 570], 10.00th=[41157], 20.00th=[41157], 00:32:10.667 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:32:10.667 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:10.667 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:10.667 | 99.99th=[42206] 00:32:10.667 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 
00:32:10.667 slat (nsec): min=6576, max=63174, avg=18885.53, stdev=9899.77 00:32:10.667 clat (usec): min=174, max=2121, avg=264.14, stdev=131.69 00:32:10.667 lat (usec): min=188, max=2151, avg=283.03, stdev=133.93 00:32:10.667 clat percentiles (usec): 00:32:10.667 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:32:10.667 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 235], 00:32:10.667 | 70.00th=[ 255], 80.00th=[ 351], 90.00th=[ 388], 95.00th=[ 416], 00:32:10.667 | 99.00th=[ 478], 99.50th=[ 519], 99.90th=[ 2114], 99.95th=[ 2114], 00:32:10.667 | 99.99th=[ 2114] 00:32:10.667 bw ( KiB/s): min= 4096, max= 4096, per=26.03%, avg=4096.00, stdev= 0.00, samples=1 00:32:10.667 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:10.667 lat (usec) : 250=65.61%, 500=29.53%, 750=0.56% 00:32:10.667 lat (msec) : 2=0.19%, 4=0.19%, 50=3.93% 00:32:10.667 cpu : usr=0.88%, sys=0.49%, ctx=536, majf=0, minf=1 00:32:10.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.667 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.667 job3: (groupid=0, jobs=1): err= 0: pid=1277722: Fri Dec 6 19:30:20 2024 00:32:10.667 read: IOPS=1556, BW=6227KiB/s (6376kB/s)(6264KiB/1006msec) 00:32:10.667 slat (nsec): min=5691, max=69932, avg=13863.58, stdev=5112.80 00:32:10.667 clat (usec): min=199, max=41014, avg=348.50, stdev=1771.62 00:32:10.667 lat (usec): min=209, max=41031, avg=362.36, stdev=1771.85 00:32:10.667 clat percentiles (usec): 00:32:10.667 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 227], 00:32:10.667 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:32:10.667 | 70.00th=[ 251], 80.00th=[ 277], 90.00th=[ 449], 95.00th=[ 
486], 00:32:10.667 | 99.00th=[ 537], 99.50th=[ 578], 99.90th=[40633], 99.95th=[41157], 00:32:10.667 | 99.99th=[41157] 00:32:10.667 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:32:10.667 slat (nsec): min=7488, max=78238, avg=13853.10, stdev=6872.48 00:32:10.667 clat (usec): min=138, max=1631, avg=192.64, stdev=68.82 00:32:10.667 lat (usec): min=147, max=1646, avg=206.49, stdev=72.52 00:32:10.667 clat percentiles (usec): 00:32:10.667 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 151], 20.00th=[ 155], 00:32:10.667 | 30.00th=[ 157], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:32:10.667 | 70.00th=[ 186], 80.00th=[ 223], 90.00th=[ 260], 95.00th=[ 318], 00:32:10.667 | 99.00th=[ 445], 99.50th=[ 474], 99.90th=[ 523], 99.95th=[ 873], 00:32:10.667 | 99.99th=[ 1631] 00:32:10.667 bw ( KiB/s): min= 8192, max= 8192, per=52.05%, avg=8192.00, stdev= 0.00, samples=2 00:32:10.667 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:32:10.667 lat (usec) : 250=79.94%, 500=18.73%, 750=1.19%, 1000=0.03% 00:32:10.667 lat (msec) : 2=0.03%, 50=0.08% 00:32:10.667 cpu : usr=2.79%, sys=5.47%, ctx=3615, majf=0, minf=2 00:32:10.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.667 issued rwts: total=1566,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.667 00:32:10.667 Run status group 0 (all jobs): 00:32:10.667 READ: bw=9.84MiB/s (10.3MB/s), 84.5KiB/s-6227KiB/s (86.6kB/s-6376kB/s), io=10.2MiB (10.7MB), run=1006-1041msec 00:32:10.667 WRITE: bw=15.4MiB/s (16.1MB/s), 1967KiB/s-8143KiB/s (2015kB/s-8339kB/s), io=16.0MiB (16.8MB), run=1006-1041msec 00:32:10.667 00:32:10.667 Disk stats (read/write): 00:32:10.667 nvme0n1: ios=67/512, merge=0/0, ticks=700/141, in_queue=841, 
util=87.47% 00:32:10.667 nvme0n2: ios=1057/1024, merge=0/0, ticks=633/212, in_queue=845, util=91.38% 00:32:10.667 nvme0n3: ios=42/512, merge=0/0, ticks=1612/136, in_queue=1748, util=93.67% 00:32:10.667 nvme0n4: ios=1561/1935, merge=0/0, ticks=1307/368, in_queue=1675, util=94.46% 00:32:10.667 19:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:10.667 [global] 00:32:10.667 thread=1 00:32:10.667 invalidate=1 00:32:10.667 rw=write 00:32:10.667 time_based=1 00:32:10.667 runtime=1 00:32:10.667 ioengine=libaio 00:32:10.667 direct=1 00:32:10.667 bs=4096 00:32:10.667 iodepth=128 00:32:10.667 norandommap=0 00:32:10.667 numjobs=1 00:32:10.667 00:32:10.667 verify_dump=1 00:32:10.667 verify_backlog=512 00:32:10.667 verify_state_save=0 00:32:10.667 do_verify=1 00:32:10.667 verify=crc32c-intel 00:32:10.667 [job0] 00:32:10.667 filename=/dev/nvme0n1 00:32:10.667 [job1] 00:32:10.667 filename=/dev/nvme0n2 00:32:10.667 [job2] 00:32:10.667 filename=/dev/nvme0n3 00:32:10.667 [job3] 00:32:10.667 filename=/dev/nvme0n4 00:32:10.667 Could not set queue depth (nvme0n1) 00:32:10.667 Could not set queue depth (nvme0n2) 00:32:10.667 Could not set queue depth (nvme0n3) 00:32:10.667 Could not set queue depth (nvme0n4) 00:32:10.952 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.952 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.952 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.952 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.952 fio-3.35 00:32:10.952 Starting 4 threads 00:32:11.885 00:32:11.885 job0: (groupid=0, jobs=1): err= 0: pid=1277948: Fri Dec 6 19:30:22 2024 
00:32:11.885 read: IOPS=3312, BW=12.9MiB/s (13.6MB/s)(13.1MiB/1009msec) 00:32:11.885 slat (usec): min=2, max=16449, avg=135.83, stdev=904.72 00:32:11.885 clat (usec): min=856, max=55879, avg=17265.28, stdev=8843.77 00:32:11.885 lat (usec): min=4104, max=55882, avg=17401.11, stdev=8899.73 00:32:11.885 clat percentiles (usec): 00:32:11.885 | 1.00th=[ 7767], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11338], 00:32:11.885 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13435], 60.00th=[15401], 00:32:11.885 | 70.00th=[18744], 80.00th=[21627], 90.00th=[28181], 95.00th=[36963], 00:32:11.885 | 99.00th=[50594], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:32:11.885 | 99.99th=[55837] 00:32:11.885 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:32:11.885 slat (usec): min=3, max=33819, avg=146.39, stdev=1142.32 00:32:11.885 clat (usec): min=5047, max=89889, avg=19487.13, stdev=14404.08 00:32:11.885 lat (usec): min=5052, max=97351, avg=19633.52, stdev=14522.06 00:32:11.885 clat percentiles (usec): 00:32:11.885 | 1.00th=[ 5735], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[10945], 00:32:11.885 | 30.00th=[11076], 40.00th=[11994], 50.00th=[12518], 60.00th=[13960], 00:32:11.885 | 70.00th=[19530], 80.00th=[25035], 90.00th=[35914], 95.00th=[59507], 00:32:11.885 | 99.00th=[67634], 99.50th=[74974], 99.90th=[89654], 99.95th=[89654], 00:32:11.885 | 99.99th=[89654] 00:32:11.885 bw ( KiB/s): min=11783, max=16912, per=24.66%, avg=14347.50, stdev=3626.75, samples=2 00:32:11.885 iops : min= 2945, max= 4228, avg=3586.50, stdev=907.22, samples=2 00:32:11.885 lat (usec) : 1000=0.01% 00:32:11.885 lat (msec) : 10=6.21%, 20=66.99%, 50=22.32%, 100=4.46% 00:32:11.885 cpu : usr=3.27%, sys=4.96%, ctx=244, majf=0, minf=1 00:32:11.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:11.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:32:11.885 issued rwts: total=3342,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.885 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.885 job1: (groupid=0, jobs=1): err= 0: pid=1277949: Fri Dec 6 19:30:22 2024 00:32:11.885 read: IOPS=4943, BW=19.3MiB/s (20.2MB/s)(19.5MiB/1009msec) 00:32:11.885 slat (usec): min=2, max=10618, avg=92.06, stdev=684.01 00:32:11.885 clat (usec): min=6433, max=40057, avg=11836.20, stdev=4035.21 00:32:11.885 lat (usec): min=6457, max=40074, avg=11928.26, stdev=4092.35 00:32:11.885 clat percentiles (usec): 00:32:11.885 | 1.00th=[ 7504], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 9241], 00:32:11.885 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10814], 60.00th=[11600], 00:32:11.885 | 70.00th=[12125], 80.00th=[13566], 90.00th=[16450], 95.00th=[18220], 00:32:11.885 | 99.00th=[30016], 99.50th=[35914], 99.90th=[40109], 99.95th=[40109], 00:32:11.885 | 99.99th=[40109] 00:32:11.885 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:32:11.885 slat (usec): min=3, max=9423, avg=95.22, stdev=578.00 00:32:11.885 clat (usec): min=1026, max=40065, avg=13436.68, stdev=7336.60 00:32:11.885 lat (usec): min=1034, max=40084, avg=13531.89, stdev=7388.81 00:32:11.885 clat percentiles (usec): 00:32:11.885 | 1.00th=[ 3490], 5.00th=[ 6194], 10.00th=[ 7635], 20.00th=[ 8586], 00:32:11.885 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[10945], 60.00th=[11731], 00:32:11.885 | 70.00th=[12518], 80.00th=[16319], 90.00th=[28967], 95.00th=[32113], 00:32:11.885 | 99.00th=[33162], 99.50th=[33162], 99.90th=[35390], 99.95th=[35390], 00:32:11.885 | 99.99th=[40109] 00:32:11.885 bw ( KiB/s): min=16664, max=24296, per=35.20%, avg=20480.00, stdev=5396.64, samples=2 00:32:11.885 iops : min= 4166, max= 6074, avg=5120.00, stdev=1349.16, samples=2 00:32:11.885 lat (msec) : 2=0.17%, 4=0.38%, 10=34.05%, 20=56.36%, 50=9.04% 00:32:11.885 cpu : usr=6.35%, sys=9.33%, ctx=386, majf=0, minf=1 00:32:11.885 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:11.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.885 issued rwts: total=4988,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.885 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.885 job2: (groupid=0, jobs=1): err= 0: pid=1277951: Fri Dec 6 19:30:22 2024 00:32:11.885 read: IOPS=3018, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1005msec) 00:32:11.885 slat (usec): min=3, max=16817, avg=133.96, stdev=758.79 00:32:11.885 clat (usec): min=4306, max=54779, avg=15374.77, stdev=5149.21 00:32:11.885 lat (usec): min=4320, max=54795, avg=15508.74, stdev=5235.40 00:32:11.885 clat percentiles (usec): 00:32:11.886 | 1.00th=[ 4948], 5.00th=[10683], 10.00th=[11600], 20.00th=[12911], 00:32:11.886 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14353], 60.00th=[14746], 00:32:11.886 | 70.00th=[15664], 80.00th=[16188], 90.00th=[19268], 95.00th=[25297], 00:32:11.886 | 99.00th=[38011], 99.50th=[38011], 99.90th=[47449], 99.95th=[47449], 00:32:11.886 | 99.99th=[54789] 00:32:11.886 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:32:11.886 slat (usec): min=4, max=20488, avg=182.61, stdev=915.67 00:32:11.886 clat (usec): min=8005, max=75613, avg=26193.24, stdev=15059.81 00:32:11.886 lat (usec): min=8012, max=75629, avg=26375.85, stdev=15137.23 00:32:11.886 clat percentiles (usec): 00:32:11.886 | 1.00th=[ 9372], 5.00th=[12256], 10.00th=[12911], 20.00th=[13042], 00:32:11.886 | 30.00th=[13304], 40.00th=[19792], 50.00th=[22414], 60.00th=[24773], 00:32:11.886 | 70.00th=[30540], 80.00th=[37487], 90.00th=[49546], 95.00th=[59507], 00:32:11.886 | 99.00th=[70779], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:32:11.886 | 99.99th=[76022] 00:32:11.886 bw ( KiB/s): min= 9416, max=15190, per=21.15%, avg=12303.00, stdev=4082.83, samples=2 00:32:11.886 iops : min= 2354, max= 3797, 
avg=3075.50, stdev=1020.36, samples=2 00:32:11.886 lat (msec) : 10=2.16%, 20=63.13%, 50=29.68%, 100=5.03% 00:32:11.886 cpu : usr=5.08%, sys=5.88%, ctx=419, majf=0, minf=2 00:32:11.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:11.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.886 issued rwts: total=3034,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.886 job3: (groupid=0, jobs=1): err= 0: pid=1277952: Fri Dec 6 19:30:22 2024 00:32:11.886 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:32:11.886 slat (usec): min=3, max=16944, avg=144.74, stdev=992.16 00:32:11.886 clat (usec): min=6162, max=39278, avg=17217.84, stdev=6120.70 00:32:11.886 lat (usec): min=6180, max=39284, avg=17362.57, stdev=6189.85 00:32:11.886 clat percentiles (usec): 00:32:11.886 | 1.00th=[ 6915], 5.00th=[10028], 10.00th=[11600], 20.00th=[11731], 00:32:11.886 | 30.00th=[13042], 40.00th=[14353], 50.00th=[16712], 60.00th=[17957], 00:32:11.886 | 70.00th=[19268], 80.00th=[20579], 90.00th=[26346], 95.00th=[30278], 00:32:11.886 | 99.00th=[36439], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:32:11.886 | 99.99th=[39060] 00:32:11.886 write: IOPS=2920, BW=11.4MiB/s (12.0MB/s)(11.6MiB/1013msec); 0 zone resets 00:32:11.886 slat (usec): min=4, max=21607, avg=202.52, stdev=911.22 00:32:11.886 clat (usec): min=4262, max=64947, avg=28499.58, stdev=12388.49 00:32:11.886 lat (usec): min=4279, max=64967, avg=28702.10, stdev=12463.83 00:32:11.886 clat percentiles (usec): 00:32:11.886 | 1.00th=[ 5342], 5.00th=[10028], 10.00th=[16581], 20.00th=[18744], 00:32:11.886 | 30.00th=[22152], 40.00th=[22938], 50.00th=[24511], 60.00th=[27395], 00:32:11.886 | 70.00th=[34341], 80.00th=[39584], 90.00th=[46924], 95.00th=[52691], 00:32:11.886 | 99.00th=[61080], 99.50th=[63177], 
99.90th=[64750], 99.95th=[64750], 00:32:11.886 | 99.99th=[64750] 00:32:11.886 bw ( KiB/s): min=10360, max=12288, per=19.46%, avg=11324.00, stdev=1363.30, samples=2 00:32:11.886 iops : min= 2590, max= 3072, avg=2831.00, stdev=340.83, samples=2 00:32:11.886 lat (msec) : 10=4.53%, 20=43.31%, 50=48.62%, 100=3.53% 00:32:11.886 cpu : usr=3.75%, sys=6.52%, ctx=343, majf=0, minf=1 00:32:11.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:32:11.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.886 issued rwts: total=2560,2958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.886 00:32:11.886 Run status group 0 (all jobs): 00:32:11.886 READ: bw=53.7MiB/s (56.3MB/s), 9.87MiB/s-19.3MiB/s (10.4MB/s-20.2MB/s), io=54.4MiB (57.0MB), run=1005-1013msec 00:32:11.886 WRITE: bw=56.8MiB/s (59.6MB/s), 11.4MiB/s-19.8MiB/s (12.0MB/s-20.8MB/s), io=57.6MiB (60.3MB), run=1005-1013msec 00:32:11.886 00:32:11.886 Disk stats (read/write): 00:32:11.886 nvme0n1: ios=2609/3060, merge=0/0, ticks=18133/24788, in_queue=42921, util=86.07% 00:32:11.886 nvme0n2: ios=4655/4615, merge=0/0, ticks=51779/52335, in_queue=104114, util=89.95% 00:32:11.886 nvme0n3: ios=2617/2735, merge=0/0, ticks=18061/29833, in_queue=47894, util=94.70% 00:32:11.886 nvme0n4: ios=2105/2455, merge=0/0, ticks=35456/68584, in_queue=104040, util=94.24% 00:32:11.886 19:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:11.886 [global] 00:32:11.886 thread=1 00:32:11.886 invalidate=1 00:32:11.886 rw=randwrite 00:32:11.886 time_based=1 00:32:11.886 runtime=1 00:32:11.886 ioengine=libaio 00:32:11.886 direct=1 00:32:11.886 bs=4096 00:32:11.886 iodepth=128 00:32:11.886 
norandommap=0 00:32:11.886 numjobs=1 00:32:11.886 00:32:11.886 verify_dump=1 00:32:11.886 verify_backlog=512 00:32:11.886 verify_state_save=0 00:32:11.886 do_verify=1 00:32:11.886 verify=crc32c-intel 00:32:11.886 [job0] 00:32:11.886 filename=/dev/nvme0n1 00:32:11.886 [job1] 00:32:11.886 filename=/dev/nvme0n2 00:32:11.886 [job2] 00:32:11.886 filename=/dev/nvme0n3 00:32:11.886 [job3] 00:32:11.886 filename=/dev/nvme0n4 00:32:12.144 Could not set queue depth (nvme0n1) 00:32:12.144 Could not set queue depth (nvme0n2) 00:32:12.144 Could not set queue depth (nvme0n3) 00:32:12.144 Could not set queue depth (nvme0n4) 00:32:12.144 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:12.144 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:12.144 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:12.144 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:12.144 fio-3.35 00:32:12.144 Starting 4 threads 00:32:13.519 00:32:13.519 job0: (groupid=0, jobs=1): err= 0: pid=1278174: Fri Dec 6 19:30:23 2024 00:32:13.519 read: IOPS=3625, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1010msec) 00:32:13.519 slat (nsec): min=1866, max=17915k, avg=116793.55, stdev=941474.03 00:32:13.519 clat (usec): min=4830, max=48583, avg=14883.81, stdev=5785.58 00:32:13.519 lat (usec): min=4833, max=48587, avg=15000.60, stdev=5858.62 00:32:13.519 clat percentiles (usec): 00:32:13.519 | 1.00th=[ 6521], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[11469], 00:32:13.519 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13304], 60.00th=[14353], 00:32:13.519 | 70.00th=[15008], 80.00th=[17957], 90.00th=[21627], 95.00th=[27919], 00:32:13.519 | 99.00th=[38011], 99.50th=[43254], 99.90th=[48497], 99.95th=[48497], 00:32:13.519 | 99.99th=[48497] 00:32:13.519 write: IOPS=4055, 
BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:32:13.519 slat (usec): min=2, max=13690, avg=124.03, stdev=789.59 00:32:13.519 clat (usec): min=2775, max=48565, avg=17993.75, stdev=11826.72 00:32:13.519 lat (usec): min=2779, max=48570, avg=18117.78, stdev=11915.46 00:32:13.519 clat percentiles (usec): 00:32:13.519 | 1.00th=[ 6390], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[ 9765], 00:32:13.519 | 30.00th=[10421], 40.00th=[11731], 50.00th=[12518], 60.00th=[14091], 00:32:13.519 | 70.00th=[15926], 80.00th=[31851], 90.00th=[39584], 95.00th=[43254], 00:32:13.519 | 99.00th=[43779], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:32:13.519 | 99.99th=[48497] 00:32:13.519 bw ( KiB/s): min=11896, max=20480, per=28.82%, avg=16188.00, stdev=6069.80, samples=2 00:32:13.519 iops : min= 2974, max= 5120, avg=4047.00, stdev=1517.45, samples=2 00:32:13.519 lat (msec) : 4=0.26%, 10=18.91%, 20=61.82%, 50=19.01% 00:32:13.519 cpu : usr=1.78%, sys=2.97%, ctx=308, majf=0, minf=1 00:32:13.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:13.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.519 issued rwts: total=3662,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.519 job1: (groupid=0, jobs=1): err= 0: pid=1278180: Fri Dec 6 19:30:23 2024 00:32:13.519 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:32:13.519 slat (nsec): min=1904, max=8277.2k, avg=115253.77, stdev=678212.61 00:32:13.519 clat (usec): min=7468, max=34063, avg=15076.54, stdev=5628.40 00:32:13.519 lat (usec): min=7471, max=34066, avg=15191.79, stdev=5686.42 00:32:13.519 clat percentiles (usec): 00:32:13.519 | 1.00th=[ 8160], 5.00th=[10028], 10.00th=[10683], 20.00th=[10945], 00:32:13.519 | 30.00th=[11076], 40.00th=[11863], 50.00th=[13435], 60.00th=[13698], 00:32:13.519 | 
70.00th=[15008], 80.00th=[20841], 90.00th=[24511], 95.00th=[26346], 00:32:13.519 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[33424], 00:32:13.519 | 99.99th=[33817] 00:32:13.519 write: IOPS=3685, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1007msec); 0 zone resets 00:32:13.519 slat (usec): min=2, max=12741, avg=154.28, stdev=787.26 00:32:13.519 clat (usec): min=3708, max=89223, avg=18894.01, stdev=13417.41 00:32:13.519 lat (usec): min=3713, max=89229, avg=19048.29, stdev=13523.56 00:32:13.519 clat percentiles (usec): 00:32:13.519 | 1.00th=[ 7177], 5.00th=[10028], 10.00th=[10552], 20.00th=[11076], 00:32:13.519 | 30.00th=[11338], 40.00th=[11994], 50.00th=[13566], 60.00th=[14222], 00:32:13.519 | 70.00th=[20055], 80.00th=[22414], 90.00th=[36439], 95.00th=[50070], 00:32:13.519 | 99.00th=[77071], 99.50th=[82314], 99.90th=[89654], 99.95th=[89654], 00:32:13.519 | 99.99th=[89654] 00:32:13.519 bw ( KiB/s): min=12288, max=16384, per=25.52%, avg=14336.00, stdev=2896.31, samples=2 00:32:13.519 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:32:13.519 lat (msec) : 4=0.21%, 10=4.58%, 20=69.76%, 50=22.97%, 100=2.48% 00:32:13.519 cpu : usr=0.99%, sys=3.58%, ctx=425, majf=0, minf=1 00:32:13.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:32:13.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.519 issued rwts: total=3584,3711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.519 job2: (groupid=0, jobs=1): err= 0: pid=1278182: Fri Dec 6 19:30:23 2024 00:32:13.519 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:32:13.519 slat (usec): min=2, max=14974, avg=154.00, stdev=994.32 00:32:13.519 clat (usec): min=10520, max=63390, avg=20515.53, stdev=7724.59 00:32:13.519 lat (usec): min=10524, max=63393, avg=20669.54, stdev=7793.12 
00:32:13.519 clat percentiles (usec): 00:32:13.519 | 1.00th=[11994], 5.00th=[13829], 10.00th=[14484], 20.00th=[15008], 00:32:13.519 | 30.00th=[15926], 40.00th=[16450], 50.00th=[17433], 60.00th=[20055], 00:32:13.519 | 70.00th=[22414], 80.00th=[24249], 90.00th=[29492], 95.00th=[34341], 00:32:13.519 | 99.00th=[52167], 99.50th=[56886], 99.90th=[63177], 99.95th=[63177], 00:32:13.519 | 99.99th=[63177] 00:32:13.519 write: IOPS=2998, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1003msec); 0 zone resets 00:32:13.519 slat (usec): min=3, max=24309, avg=197.19, stdev=1504.13 00:32:13.519 clat (usec): min=1127, max=70794, avg=24844.96, stdev=14452.53 00:32:13.519 lat (usec): min=1150, max=70845, avg=25042.15, stdev=14606.70 00:32:13.519 clat percentiles (usec): 00:32:13.519 | 1.00th=[ 5800], 5.00th=[10421], 10.00th=[11600], 20.00th=[13829], 00:32:13.519 | 30.00th=[15008], 40.00th=[16057], 50.00th=[16712], 60.00th=[20841], 00:32:13.519 | 70.00th=[34341], 80.00th=[38536], 90.00th=[48497], 95.00th=[53740], 00:32:13.519 | 99.00th=[58459], 99.50th=[58459], 99.90th=[66323], 99.95th=[70779], 00:32:13.519 | 99.99th=[70779] 00:32:13.519 bw ( KiB/s): min= 9776, max=13264, per=20.51%, avg=11520.00, stdev=2466.39, samples=2 00:32:13.519 iops : min= 2444, max= 3316, avg=2880.00, stdev=616.60, samples=2 00:32:13.519 lat (msec) : 2=0.05%, 10=2.03%, 20=57.14%, 50=35.82%, 100=4.96% 00:32:13.519 cpu : usr=1.30%, sys=2.69%, ctx=161, majf=0, minf=2 00:32:13.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:32:13.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.519 issued rwts: total=2560,3007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.519 job3: (groupid=0, jobs=1): err= 0: pid=1278183: Fri Dec 6 19:30:23 2024 00:32:13.519 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 
00:32:13.519 slat (usec): min=3, max=12841, avg=130.06, stdev=813.11 00:32:13.519 clat (usec): min=9501, max=32140, avg=16124.94, stdev=3341.99 00:32:13.519 lat (usec): min=9510, max=32166, avg=16255.01, stdev=3418.37 00:32:13.519 clat percentiles (usec): 00:32:13.519 | 1.00th=[11076], 5.00th=[12256], 10.00th=[13042], 20.00th=[13435], 00:32:13.519 | 30.00th=[13829], 40.00th=[14353], 50.00th=[15139], 60.00th=[15795], 00:32:13.519 | 70.00th=[17171], 80.00th=[19268], 90.00th=[20841], 95.00th=[22414], 00:32:13.519 | 99.00th=[26346], 99.50th=[28181], 99.90th=[29230], 99.95th=[29492], 00:32:13.519 | 99.99th=[32113] 00:32:13.519 write: IOPS=3347, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1007msec); 0 zone resets 00:32:13.519 slat (usec): min=3, max=19185, avg=173.36, stdev=787.70 00:32:13.519 clat (usec): min=625, max=62035, avg=23109.21, stdev=10728.42 00:32:13.519 lat (usec): min=7001, max=62060, avg=23282.57, stdev=10798.71 00:32:13.519 clat percentiles (usec): 00:32:13.519 | 1.00th=[ 8225], 5.00th=[11076], 10.00th=[13304], 20.00th=[14746], 00:32:13.519 | 30.00th=[15533], 40.00th=[16712], 50.00th=[17695], 60.00th=[22414], 00:32:13.519 | 70.00th=[28181], 80.00th=[33817], 90.00th=[36439], 95.00th=[38536], 00:32:13.519 | 99.00th=[58983], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:32:13.519 | 99.99th=[62129] 00:32:13.519 bw ( KiB/s): min=12344, max=13600, per=23.09%, avg=12972.00, stdev=888.13, samples=2 00:32:13.519 iops : min= 3086, max= 3400, avg=3243.00, stdev=222.03, samples=2 00:32:13.519 lat (usec) : 750=0.02% 00:32:13.519 lat (msec) : 10=2.08%, 20=65.47%, 50=30.89%, 100=1.55% 00:32:13.519 cpu : usr=3.48%, sys=3.68%, ctx=386, majf=0, minf=1 00:32:13.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:32:13.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.519 issued rwts: total=3072,3371,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:32:13.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.519 00:32:13.519 Run status group 0 (all jobs): 00:32:13.519 READ: bw=49.8MiB/s (52.2MB/s), 9.97MiB/s-14.2MiB/s (10.5MB/s-14.9MB/s), io=50.3MiB (52.7MB), run=1003-1010msec 00:32:13.519 WRITE: bw=54.9MiB/s (57.5MB/s), 11.7MiB/s-15.8MiB/s (12.3MB/s-16.6MB/s), io=55.4MiB (58.1MB), run=1003-1010msec 00:32:13.519 00:32:13.519 Disk stats (read/write): 00:32:13.519 nvme0n1: ios=3516/3584, merge=0/0, ticks=49907/55184, in_queue=105091, util=87.47% 00:32:13.519 nvme0n2: ios=3116/3241, merge=0/0, ticks=14153/21453, in_queue=35606, util=89.85% 00:32:13.519 nvme0n3: ios=2105/2257, merge=0/0, ticks=21220/26968, in_queue=48188, util=90.94% 00:32:13.519 nvme0n4: ios=2613/2879, merge=0/0, ticks=19910/32384, in_queue=52294, util=97.06% 00:32:13.519 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:13.519 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1278348 00:32:13.519 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:13.519 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:13.519 [global] 00:32:13.519 thread=1 00:32:13.519 invalidate=1 00:32:13.519 rw=read 00:32:13.519 time_based=1 00:32:13.519 runtime=10 00:32:13.519 ioengine=libaio 00:32:13.520 direct=1 00:32:13.520 bs=4096 00:32:13.520 iodepth=1 00:32:13.520 norandommap=1 00:32:13.520 numjobs=1 00:32:13.520 00:32:13.520 [job0] 00:32:13.520 filename=/dev/nvme0n1 00:32:13.520 [job1] 00:32:13.520 filename=/dev/nvme0n2 00:32:13.520 [job2] 00:32:13.520 filename=/dev/nvme0n3 00:32:13.520 [job3] 00:32:13.520 filename=/dev/nvme0n4 00:32:13.520 Could not set queue depth (nvme0n1) 00:32:13.520 Could not set queue depth (nvme0n2) 
00:32:13.520 Could not set queue depth (nvme0n3) 00:32:13.520 Could not set queue depth (nvme0n4) 00:32:13.777 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:13.777 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:13.777 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:13.777 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:13.777 fio-3.35 00:32:13.777 Starting 4 threads 00:32:17.053 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:17.053 19:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:17.053 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=626688, buflen=4096 00:32:17.053 fio: pid=1278535, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:17.053 19:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:17.053 19:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:17.053 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4341760, buflen=4096 00:32:17.053 fio: pid=1278534, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:17.310 19:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:17.310 19:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:17.310 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1413120, buflen=4096 00:32:17.310 fio: pid=1278532, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:17.567 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:17.567 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:17.567 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=14442496, buflen=4096 00:32:17.568 fio: pid=1278533, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:32:17.568 00:32:17.568 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278532: Fri Dec 6 19:30:28 2024 00:32:17.568 read: IOPS=98, BW=394KiB/s (404kB/s)(1380KiB/3501msec) 00:32:17.568 slat (usec): min=4, max=12834, avg=70.08, stdev=780.10 00:32:17.568 clat (usec): min=201, max=41251, avg=10005.47, stdev=17322.17 00:32:17.568 lat (usec): min=206, max=53998, avg=10055.92, stdev=17406.32 00:32:17.568 clat percentiles (usec): 00:32:17.568 | 1.00th=[ 217], 5.00th=[ 258], 10.00th=[ 273], 20.00th=[ 281], 00:32:17.568 | 30.00th=[ 293], 40.00th=[ 314], 50.00th=[ 379], 60.00th=[ 433], 00:32:17.568 | 70.00th=[ 465], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:17.568 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:17.568 | 99.99th=[41157] 00:32:17.568 bw ( KiB/s): min= 96, max= 1584, per=8.25%, avg=444.00, stdev=579.91, samples=6 00:32:17.568 iops : min= 24, max= 396, avg=111.00, stdev=144.98, samples=6 00:32:17.568 lat (usec) : 
250=4.62%, 500=67.92%, 750=3.18%, 1000=0.29% 00:32:17.568 lat (msec) : 50=23.70% 00:32:17.568 cpu : usr=0.09%, sys=0.09%, ctx=348, majf=0, minf=1 00:32:17.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.568 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.568 issued rwts: total=346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.568 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1278533: Fri Dec 6 19:30:28 2024 00:32:17.568 read: IOPS=933, BW=3731KiB/s (3821kB/s)(13.8MiB/3780msec) 00:32:17.568 slat (usec): min=3, max=10940, avg=16.21, stdev=246.10 00:32:17.568 clat (usec): min=187, max=53556, avg=1053.47, stdev=5711.63 00:32:17.568 lat (usec): min=192, max=64496, avg=1067.75, stdev=5745.30 00:32:17.568 clat percentiles (usec): 00:32:17.568 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 217], 00:32:17.568 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:32:17.568 | 70.00th=[ 235], 80.00th=[ 251], 90.00th=[ 310], 95.00th=[ 375], 00:32:17.568 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:17.568 | 99.99th=[53740] 00:32:17.568 bw ( KiB/s): min= 96, max=15160, per=73.66%, avg=3963.43, stdev=6142.58, samples=7 00:32:17.568 iops : min= 24, max= 3790, avg=990.86, stdev=1535.65, samples=7 00:32:17.568 lat (usec) : 250=79.81%, 500=18.12% 00:32:17.568 lat (msec) : 2=0.03%, 10=0.03%, 50=1.96%, 100=0.03% 00:32:17.568 cpu : usr=0.50%, sys=1.03%, ctx=3530, majf=0, minf=1 00:32:17.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.568 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.568 issued 
rwts: total=3527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.568 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278534: Fri Dec 6 19:30:28 2024 00:32:17.568 read: IOPS=328, BW=1314KiB/s (1345kB/s)(4240KiB/3228msec) 00:32:17.568 slat (nsec): min=4308, max=54734, avg=13228.98, stdev=7887.69 00:32:17.568 clat (usec): min=195, max=41564, avg=3006.55, stdev=10108.50 00:32:17.568 lat (usec): min=203, max=41578, avg=3019.77, stdev=10110.58 00:32:17.568 clat percentiles (usec): 00:32:17.568 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 239], 00:32:17.568 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 273], 60.00th=[ 306], 00:32:17.568 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 523], 95.00th=[41157], 00:32:17.568 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:32:17.568 | 99.99th=[41681] 00:32:17.568 bw ( KiB/s): min= 96, max= 4656, per=23.76%, avg=1278.67, stdev=1796.18, samples=6 00:32:17.568 iops : min= 24, max= 1164, avg=319.67, stdev=449.05, samples=6 00:32:17.568 lat (usec) : 250=38.93%, 500=49.86%, 750=4.24% 00:32:17.568 lat (msec) : 2=0.09%, 4=0.09%, 20=0.09%, 50=6.60% 00:32:17.568 cpu : usr=0.09%, sys=0.71%, ctx=1061, majf=0, minf=2 00:32:17.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.568 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.568 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.568 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278535: Fri Dec 6 19:30:28 2024 00:32:17.568 read: IOPS=52, BW=209KiB/s (214kB/s)(612KiB/2934msec) 00:32:17.568 slat (nsec): min=7561, max=37493, avg=17051.90, 
stdev=8972.17 00:32:17.568 clat (usec): min=283, max=41561, avg=18967.83, stdev=20285.28 00:32:17.568 lat (usec): min=292, max=41579, avg=18984.90, stdev=20289.40 00:32:17.568 clat percentiles (usec): 00:32:17.568 | 1.00th=[ 285], 5.00th=[ 310], 10.00th=[ 334], 20.00th=[ 371], 00:32:17.568 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 465], 60.00th=[41157], 00:32:17.568 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:17.568 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:17.568 | 99.99th=[41681] 00:32:17.568 bw ( KiB/s): min= 96, max= 544, per=4.24%, avg=228.80, stdev=195.35, samples=5 00:32:17.568 iops : min= 24, max= 136, avg=57.20, stdev=48.84, samples=5 00:32:17.568 lat (usec) : 500=53.25%, 750=0.65% 00:32:17.568 lat (msec) : 50=45.45% 00:32:17.568 cpu : usr=0.00%, sys=0.20%, ctx=154, majf=0, minf=2 00:32:17.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.568 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.568 issued rwts: total=154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.568 00:32:17.568 Run status group 0 (all jobs): 00:32:17.568 READ: bw=5380KiB/s (5509kB/s), 209KiB/s-3731KiB/s (214kB/s-3821kB/s), io=19.9MiB (20.8MB), run=2934-3780msec 00:32:17.568 00:32:17.568 Disk stats (read/write): 00:32:17.568 nvme0n1: ios=342/0, merge=0/0, ticks=3329/0, in_queue=3329, util=95.71% 00:32:17.568 nvme0n2: ios=3521/0, merge=0/0, ticks=3502/0, in_queue=3502, util=96.20% 00:32:17.568 nvme0n3: ios=1057/0, merge=0/0, ticks=3053/0, in_queue=3053, util=96.82% 00:32:17.568 nvme0n4: ios=151/0, merge=0/0, ticks=2823/0, in_queue=2823, util=96.75% 00:32:17.826 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:32:17.826 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:18.085 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:18.085 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:18.343 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:18.343 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:18.907 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:18.907 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:18.907 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:18.907 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1278348 00:32:18.907 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:18.907 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:19.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:19.165 nvmf hotplug test: fio failed as expected 00:32:19.165 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 
00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.422 rmmod nvme_tcp 00:32:19.422 rmmod nvme_fabrics 00:32:19.422 rmmod nvme_keyring 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1276426 ']' 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1276426 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1276426 ']' 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1276426 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:19.422 19:30:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1276426 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1276426' 00:32:19.422 killing process with pid 1276426 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1276426 00:32:19.422 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1276426 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.680 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.219 00:32:22.219 real 0m23.702s 00:32:22.219 user 1m7.619s 00:32:22.219 sys 0m9.375s 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.219 ************************************ 00:32:22.219 END TEST nvmf_fio_target 00:32:22.219 ************************************ 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:22.219 ************************************ 00:32:22.219 START TEST nvmf_bdevio 00:32:22.219 ************************************ 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:22.219 * Looking for test storage... 00:32:22.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 
00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:22.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.219 --rc genhtml_branch_coverage=1 00:32:22.219 --rc genhtml_function_coverage=1 00:32:22.219 --rc genhtml_legend=1 00:32:22.219 --rc geninfo_all_blocks=1 00:32:22.219 --rc geninfo_unexecuted_blocks=1 00:32:22.219 00:32:22.219 ' 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:22.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.219 --rc genhtml_branch_coverage=1 00:32:22.219 --rc genhtml_function_coverage=1 00:32:22.219 --rc genhtml_legend=1 00:32:22.219 --rc geninfo_all_blocks=1 00:32:22.219 --rc geninfo_unexecuted_blocks=1 00:32:22.219 00:32:22.219 ' 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:22.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.219 --rc genhtml_branch_coverage=1 00:32:22.219 --rc genhtml_function_coverage=1 00:32:22.219 --rc genhtml_legend=1 00:32:22.219 --rc geninfo_all_blocks=1 00:32:22.219 --rc geninfo_unexecuted_blocks=1 00:32:22.219 00:32:22.219 ' 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:22.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.219 --rc genhtml_branch_coverage=1 00:32:22.219 --rc genhtml_function_coverage=1 00:32:22.219 --rc genhtml_legend=1 
00:32:22.219 --rc geninfo_all_blocks=1 00:32:22.219 --rc geninfo_unexecuted_blocks=1 00:32:22.219 00:32:22.219 ' 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:22.219 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:22.220 19:30:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.220 19:30:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:22.220 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.125 19:30:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.125 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:24.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:24.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.126 19:30:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:24.126 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:24.126 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.126 19:30:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:24.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:24.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:32:24.126 00:32:24.126 --- 10.0.0.2 ping statistics --- 00:32:24.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.126 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:24.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:32:24.126 00:32:24.126 --- 10.0.0.1 ping statistics --- 00:32:24.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.126 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.126 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1281158 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1281158 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1281158 ']' 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.127 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.385 [2024-12-06 19:30:34.743087] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:24.385 [2024-12-06 19:30:34.744257] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:32:24.385 [2024-12-06 19:30:34.744314] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.385 [2024-12-06 19:30:34.822142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:24.385 [2024-12-06 19:30:34.883656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.385 [2024-12-06 19:30:34.883743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.385 [2024-12-06 19:30:34.883758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.385 [2024-12-06 19:30:34.883769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.385 [2024-12-06 19:30:34.883779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:24.385 [2024-12-06 19:30:34.885482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:24.385 [2024-12-06 19:30:34.885541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:24.386 [2024-12-06 19:30:34.885606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:24.386 [2024-12-06 19:30:34.885610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:24.645 [2024-12-06 19:30:34.985215] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:24.645 [2024-12-06 19:30:34.985411] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:24.645 [2024-12-06 19:30:34.985735] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
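The `nvmf_tgt` launch in this run passes `-m 0x78 --interrupt-mode`, and the reactor notices above confirm cores 3 through 6 came up. Decoding an SPDK-style hex core mask into core indices is simple bit arithmetic; the sketch below is an illustration of the mask seen in this log, not SPDK code:

```python
def decode_core_mask(mask: str) -> list[int]:
    """Return the CPU core indices selected by an SPDK-style hex core mask."""
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if value & (1 << bit)]

# 0x78 = 0b1111000 -> cores 3, 4, 5, 6, matching the
# "Reactor started on core 3/4/5/6" notices for nvmf_tgt in this run.
print(decode_core_mask("0x78"))  # [3, 4, 5, 6]
```

The same decoding applies to the bdevio app started later with `-c 0x7`, whose reactors come up on cores 0, 1, and 2.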
00:32:24.645 [2024-12-06 19:30:34.986395] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:24.645 [2024-12-06 19:30:34.986596] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.645 [2024-12-06 19:30:35.042422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.645 Malloc0 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.645 [2024-12-06 19:30:35.106550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
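The listener notice above completes the target side; the bdevio initiator then consumes the JSON config that `gen_nvmf_target_json` prints further down in this log. A hedged Python sketch reconstructing that config structure (field values copied from this run; this is an illustration of the emitted JSON, not the SPDK shell helper itself):

```python
import json

def nvmf_target_json(subsystem: int, traddr: str, trsvcid: str) -> str:
    """Build a bdev_nvme_attach_controller config block shaped like the one
    gen_nvmf_target_json emits in this log (hdgst/ddgst default to false)."""
    return json.dumps({
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }, indent=1)

# Matches the target configured above: cnode1 listening on 10.0.0.2:4420.
cfg = nvmf_target_json(1, "10.0.0.2", "4420")
```

In the test itself this JSON is piped to bdevio via `--json /dev/fd/62`, so the initiator attaches to the namespace-hosted target without a config file on disk.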
00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:24.645 { 00:32:24.645 "params": { 00:32:24.645 "name": "Nvme$subsystem", 00:32:24.645 "trtype": "$TEST_TRANSPORT", 00:32:24.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:24.645 "adrfam": "ipv4", 00:32:24.645 "trsvcid": "$NVMF_PORT", 00:32:24.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:24.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:24.645 "hdgst": ${hdgst:-false}, 00:32:24.645 "ddgst": ${ddgst:-false} 00:32:24.645 }, 00:32:24.645 "method": "bdev_nvme_attach_controller" 00:32:24.645 } 00:32:24.645 EOF 00:32:24.645 )") 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:24.645 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:24.645 "params": { 00:32:24.645 "name": "Nvme1", 00:32:24.645 "trtype": "tcp", 00:32:24.645 "traddr": "10.0.0.2", 00:32:24.645 "adrfam": "ipv4", 00:32:24.645 "trsvcid": "4420", 00:32:24.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:24.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:24.645 "hdgst": false, 00:32:24.645 "ddgst": false 00:32:24.645 }, 00:32:24.645 "method": "bdev_nvme_attach_controller" 00:32:24.645 }' 00:32:24.645 [2024-12-06 19:30:35.155237] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:32:24.645 [2024-12-06 19:30:35.155325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281192 ] 00:32:24.903 [2024-12-06 19:30:35.224577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:24.903 [2024-12-06 19:30:35.287105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.903 [2024-12-06 19:30:35.287155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:24.903 [2024-12-06 19:30:35.287160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.160 I/O targets: 00:32:25.160 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:25.160 00:32:25.160 00:32:25.160 CUnit - A unit testing framework for C - Version 2.1-3 00:32:25.160 http://cunit.sourceforge.net/ 00:32:25.160 00:32:25.160 00:32:25.160 Suite: bdevio tests on: Nvme1n1 00:32:25.160 Test: blockdev write read block ...passed 00:32:25.160 Test: blockdev write zeroes read block ...passed 00:32:25.160 Test: blockdev write zeroes read no split ...passed 00:32:25.418 Test: blockdev 
write zeroes read split ...passed 00:32:25.418 Test: blockdev write zeroes read split partial ...passed 00:32:25.418 Test: blockdev reset ...[2024-12-06 19:30:35.771081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:25.418 [2024-12-06 19:30:35.771212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe548c0 (9): Bad file descriptor 00:32:25.418 [2024-12-06 19:30:35.863782] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:25.418 passed 00:32:25.418 Test: blockdev write read 8 blocks ...passed 00:32:25.418 Test: blockdev write read size > 128k ...passed 00:32:25.418 Test: blockdev write read invalid size ...passed 00:32:25.418 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:25.418 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:25.418 Test: blockdev write read max offset ...passed 00:32:25.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:25.676 Test: blockdev writev readv 8 blocks ...passed 00:32:25.676 Test: blockdev writev readv 30 x 1block ...passed 00:32:25.676 Test: blockdev writev readv block ...passed 00:32:25.676 Test: blockdev writev readv size > 128k ...passed 00:32:25.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:25.676 Test: blockdev comparev and writev ...[2024-12-06 19:30:36.200920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.676 [2024-12-06 19:30:36.200959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:25.676 [2024-12-06 19:30:36.200984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.676 
[2024-12-06 19:30:36.201002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:25.676 [2024-12-06 19:30:36.201424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.676 [2024-12-06 19:30:36.201452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:25.676 [2024-12-06 19:30:36.201476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.676 [2024-12-06 19:30:36.201493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:25.676 [2024-12-06 19:30:36.201905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.676 [2024-12-06 19:30:36.201931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:25.676 [2024-12-06 19:30:36.201952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.676 [2024-12-06 19:30:36.201968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:25.676 [2024-12-06 19:30:36.202380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.676 [2024-12-06 19:30:36.202406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:25.676 [2024-12-06 19:30:36.202429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.676 [2024-12-06 19:30:36.202445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:25.676 passed 00:32:25.935 Test: blockdev nvme passthru rw ...passed 00:32:25.935 Test: blockdev nvme passthru vendor specific ...[2024-12-06 19:30:36.284954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:25.935 [2024-12-06 19:30:36.284984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:25.935 [2024-12-06 19:30:36.285142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:25.935 [2024-12-06 19:30:36.285175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:25.935 [2024-12-06 19:30:36.285334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:25.935 [2024-12-06 19:30:36.285357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:25.935 [2024-12-06 19:30:36.285512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:25.935 [2024-12-06 19:30:36.285536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:25.935 passed 00:32:25.935 Test: blockdev nvme admin passthru ...passed 00:32:25.935 Test: blockdev copy ...passed 00:32:25.935 00:32:25.935 Run Summary: Type Total Ran Passed Failed Inactive 00:32:25.935 suites 1 1 n/a 0 0 00:32:25.935 tests 23 23 23 0 0 00:32:25.935 asserts 152 152 152 0 n/a 00:32:25.935 00:32:25.935 Elapsed time = 1.439 
seconds 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:26.194 rmmod nvme_tcp 00:32:26.194 rmmod nvme_fabrics 00:32:26.194 rmmod nvme_keyring 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1281158 ']' 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1281158 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1281158 ']' 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1281158 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1281158 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1281158' 00:32:26.194 killing process with pid 1281158 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1281158 00:32:26.194 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1281158 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.452 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.987 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:28.987 00:32:28.987 real 0m6.634s 00:32:28.987 user 0m9.956s 00:32:28.987 sys 0m2.590s 00:32:28.987 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.987 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:28.987 ************************************ 00:32:28.987 END TEST nvmf_bdevio 00:32:28.987 ************************************ 00:32:28.987 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:28.987 00:32:28.987 real 3m55.760s 00:32:28.987 user 8m57.771s 00:32:28.987 sys 1m23.132s 00:32:28.987 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:28.987 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:28.987 ************************************ 00:32:28.987 END TEST nvmf_target_core_interrupt_mode 00:32:28.987 ************************************ 00:32:28.987 19:30:38 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:28.987 19:30:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:28.987 19:30:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.987 19:30:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.987 ************************************ 00:32:28.987 START TEST nvmf_interrupt 00:32:28.987 ************************************ 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:28.987 * Looking for test storage... 
00:32:28.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:28.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.987 --rc genhtml_branch_coverage=1 00:32:28.987 --rc genhtml_function_coverage=1 00:32:28.987 --rc genhtml_legend=1 00:32:28.987 --rc geninfo_all_blocks=1 00:32:28.987 --rc geninfo_unexecuted_blocks=1 00:32:28.987 00:32:28.987 ' 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:28.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.987 --rc genhtml_branch_coverage=1 00:32:28.987 --rc 
genhtml_function_coverage=1 00:32:28.987 --rc genhtml_legend=1 00:32:28.987 --rc geninfo_all_blocks=1 00:32:28.987 --rc geninfo_unexecuted_blocks=1 00:32:28.987 00:32:28.987 ' 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:28.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.987 --rc genhtml_branch_coverage=1 00:32:28.987 --rc genhtml_function_coverage=1 00:32:28.987 --rc genhtml_legend=1 00:32:28.987 --rc geninfo_all_blocks=1 00:32:28.987 --rc geninfo_unexecuted_blocks=1 00:32:28.987 00:32:28.987 ' 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:28.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.987 --rc genhtml_branch_coverage=1 00:32:28.987 --rc genhtml_function_coverage=1 00:32:28.987 --rc genhtml_legend=1 00:32:28.987 --rc geninfo_all_blocks=1 00:32:28.987 --rc geninfo_unexecuted_blocks=1 00:32:28.987 00:32:28.987 ' 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.987 
19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.987 19:30:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.988 
19:30:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:28.988 19:30:39 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:28.988 
19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:28.988 19:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.888 19:30:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:30.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:30.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.888 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.888 19:30:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:30.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:30.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.889 19:30:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:30.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:32:30.889 00:32:30.889 --- 10.0.0.2 ping statistics --- 00:32:30.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.889 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:30.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:32:30.889 00:32:30.889 --- 10.0.0.1 ping statistics --- 00:32:30.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.889 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.889 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:30.889 19:30:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1283400 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1283400 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1283400 ']' 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.148 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.148 [2024-12-06 19:30:41.534399] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:31.148 [2024-12-06 19:30:41.535430] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:32:31.148 [2024-12-06 19:30:41.535499] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.148 [2024-12-06 19:30:41.606948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:31.148 [2024-12-06 19:30:41.662799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.148 [2024-12-06 19:30:41.662857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.148 [2024-12-06 19:30:41.662882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.148 [2024-12-06 19:30:41.662894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.148 [2024-12-06 19:30:41.662903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.148 [2024-12-06 19:30:41.664390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.148 [2024-12-06 19:30:41.664396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.407 [2024-12-06 19:30:41.751435] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:31.407 [2024-12-06 19:30:41.751457] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:31.407 [2024-12-06 19:30:41.751687] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:31.407 5000+0 records in 00:32:31.407 5000+0 records out 00:32:31.407 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0141278 s, 725 MB/s 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.407 AIO0 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.407 19:30:41 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.407 [2024-12-06 19:30:41.845021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.407 [2024-12-06 19:30:41.873255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1283400 0 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1283400 0 idle 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1283400 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1283400 -w 256 00:32:31.407 19:30:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1283400 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0' 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1283400 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:31.666 
19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1283400 1 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1283400 1 idle 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1283400 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1283400 -w 256 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1283406 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1283406 root 20 0 128.2g 
47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1283487 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1283400 0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1283400 0 busy 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1283400 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1283400 -w 256 00:32:31.666 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:31.924 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1283400 root 20 0 128.2g 48000 34944 S 6.7 0.1 0:00.27 reactor_0' 00:32:31.924 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1283400 root 20 0 128.2g 48000 34944 S 6.7 0.1 0:00.27 reactor_0 00:32:31.924 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.924 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.924 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:31.924 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:31.924 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:31.924 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:31.924 19:30:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:32.859 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:32.859 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:32.859 19:30:43 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 1283400 -w 256 00:32:32.859 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:33.117 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1283400 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.55 reactor_0' 00:32:33.117 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1283400 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.55 reactor_0 00:32:33.117 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:33.117 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:33.117 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:33.117 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:33.117 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:33.117 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1283400 1 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1283400 1 busy 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1283400 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1283400 -w 256 00:32:33.118 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1283406 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.30 reactor_1' 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1283406 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.30 reactor_1 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:33.376 19:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1283487 00:32:43.342 Initializing NVMe Controllers 00:32:43.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:43.342 
Controller IO queue size 256, less than required. 00:32:43.342 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:43.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:43.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:43.342 Initialization complete. Launching workers. 00:32:43.342 ======================================================== 00:32:43.342 Latency(us) 00:32:43.342 Device Information : IOPS MiB/s Average min max 00:32:43.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14040.48 54.85 18245.41 4666.35 22911.06 00:32:43.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13902.78 54.31 18425.29 4721.62 22675.73 00:32:43.342 ======================================================== 00:32:43.342 Total : 27943.26 109.15 18334.91 4666.35 22911.06 00:32:43.343 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1283400 0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1283400 0 idle 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1283400 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:43.343 19:30:52 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1283400 -w 256 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1283400 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0' 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1283400 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1283400 1 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1283400 1 idle 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1283400 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1283400 -w 256 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1283406 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1' 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1283406 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:32:43.343 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:43.343 19:30:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:43.343 19:30:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:43.343 19:30:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:43.343 19:30:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:43.343 19:30:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1283400 0 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1283400 0 idle 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1283400 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:44.718 19:30:55 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:44.718 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1283400 -w 256 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1283400 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.33 reactor_0' 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1283400 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.33 reactor_0 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1283400 1 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1283400 1 idle 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1283400 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1283400 -w 256 00:32:44.719 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:44.976 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1283406 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1' 00:32:44.976 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1283406 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1 00:32:44.976 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:44.976 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:44.976 19:30:55 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:44.976 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:44.976 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:44.976 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:44.977 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:44.977 19:30:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:44.977 19:30:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:44.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:44.977 19:30:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:44.977 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:44.977 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:44.977 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:45.234 rmmod nvme_tcp 00:32:45.234 rmmod nvme_fabrics 00:32:45.234 rmmod nvme_keyring 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1283400 ']' 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1283400 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1283400 ']' 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1283400 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1283400 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1283400' 00:32:45.234 killing process with pid 1283400 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1283400 00:32:45.234 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1283400 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:45.493 19:30:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.466 19:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.466 00:32:47.466 real 0m18.919s 00:32:47.466 user 0m37.696s 00:32:47.466 sys 0m6.292s 00:32:47.466 19:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.466 19:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:47.466 ************************************ 00:32:47.466 END TEST nvmf_interrupt 00:32:47.466 ************************************ 00:32:47.466 00:32:47.466 real 25m1.516s 00:32:47.466 user 58m25.154s 00:32:47.466 sys 6m45.576s 00:32:47.466 19:30:57 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.466 19:30:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.466 ************************************ 00:32:47.466 END TEST nvmf_tcp 00:32:47.466 ************************************ 00:32:47.466 19:30:57 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:47.466 19:30:57 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:47.466 19:30:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:47.466 19:30:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.466 19:30:57 -- common/autotest_common.sh@10 -- # set +x 00:32:47.466 ************************************ 00:32:47.466 START TEST spdkcli_nvmf_tcp 00:32:47.466 ************************************ 00:32:47.466 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:47.725 * Looking for test storage... 00:32:47.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:47.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.726 --rc genhtml_branch_coverage=1 00:32:47.726 --rc genhtml_function_coverage=1 00:32:47.726 --rc genhtml_legend=1 00:32:47.726 --rc geninfo_all_blocks=1 
00:32:47.726 --rc geninfo_unexecuted_blocks=1 00:32:47.726 00:32:47.726 ' 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:47.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.726 --rc genhtml_branch_coverage=1 00:32:47.726 --rc genhtml_function_coverage=1 00:32:47.726 --rc genhtml_legend=1 00:32:47.726 --rc geninfo_all_blocks=1 00:32:47.726 --rc geninfo_unexecuted_blocks=1 00:32:47.726 00:32:47.726 ' 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:47.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.726 --rc genhtml_branch_coverage=1 00:32:47.726 --rc genhtml_function_coverage=1 00:32:47.726 --rc genhtml_legend=1 00:32:47.726 --rc geninfo_all_blocks=1 00:32:47.726 --rc geninfo_unexecuted_blocks=1 00:32:47.726 00:32:47.726 ' 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:47.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.726 --rc genhtml_branch_coverage=1 00:32:47.726 --rc genhtml_function_coverage=1 00:32:47.726 --rc genhtml_legend=1 00:32:47.726 --rc geninfo_all_blocks=1 00:32:47.726 --rc geninfo_unexecuted_blocks=1 00:32:47.726 00:32:47.726 ' 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:47.726 19:30:58 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.726 19:30:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:47.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1285453 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1285453 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 
1285453 ']' 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.727 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.727 [2024-12-06 19:30:58.218114] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:32:47.727 [2024-12-06 19:30:58.218202] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285453 ] 00:32:47.727 [2024-12-06 19:30:58.291345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:47.985 [2024-12-06 19:30:58.357888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.985 [2024-12-06 19:30:58.357892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.986 19:30:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:47.986 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:47.986 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:47.986 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:47.986 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:47.986 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:47.986 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:47.986 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:47.986 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:47.986 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:47.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:47.986 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:47.986 ' 00:32:51.271 [2024-12-06 19:31:01.118371] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.837 [2024-12-06 19:31:02.386617] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:32:54.367 [2024-12-06 19:31:04.729838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:56.265 [2024-12-06 19:31:06.744038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:58.165 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:58.165 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:58.165 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:58.165 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:58.165 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:58.165 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:58.165 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:58.165 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:58.165 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:58.165 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:58.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:58.165 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:58.165 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:32:58.165 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:58.165 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.165 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:58.165 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:58.165 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.165 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:58.165 19:31:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:58.424 19:31:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:58.424 19:31:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:58.424 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:58.424 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:58.424 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.424 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:58.424 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:58.424 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.424 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:58.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:32:58.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:58.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:58.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:58.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:58.424 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:58.424 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:58.424 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:58.424 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:58.424 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:58.424 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:58.424 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:58.424 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:58.424 ' 00:33:03.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:03.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:03.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:03.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:03.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:03.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:03.684 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:03.684 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:03.684 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:03.684 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:03.684 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:03.684 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:03.685 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:03.685 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1285453 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1285453 ']' 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1285453 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1285453 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1285453' 00:33:03.943 killing process with pid 1285453 00:33:03.943 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1285453 00:33:03.943 19:31:14 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1285453 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1285453 ']' 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1285453 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1285453 ']' 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1285453 00:33:04.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1285453) - No such process 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1285453 is not found' 00:33:04.202 Process with pid 1285453 is not found 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:04.202 00:33:04.202 real 0m16.546s 00:33:04.202 user 0m35.160s 00:33:04.202 sys 0m0.775s 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:04.202 19:31:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:04.202 ************************************ 00:33:04.202 END TEST spdkcli_nvmf_tcp 00:33:04.202 ************************************ 00:33:04.202 19:31:14 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:04.202 19:31:14 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:33:04.202 19:31:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.202 19:31:14 -- common/autotest_common.sh@10 -- # set +x 00:33:04.202 ************************************ 00:33:04.202 START TEST nvmf_identify_passthru 00:33:04.202 ************************************ 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:04.202 * Looking for test storage... 00:33:04.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:04.202 19:31:14 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:04.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.202 --rc genhtml_branch_coverage=1 00:33:04.202 --rc genhtml_function_coverage=1 00:33:04.202 --rc genhtml_legend=1 
00:33:04.202 --rc geninfo_all_blocks=1 00:33:04.202 --rc geninfo_unexecuted_blocks=1 00:33:04.202 00:33:04.202 ' 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:04.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.202 --rc genhtml_branch_coverage=1 00:33:04.202 --rc genhtml_function_coverage=1 00:33:04.202 --rc genhtml_legend=1 00:33:04.202 --rc geninfo_all_blocks=1 00:33:04.202 --rc geninfo_unexecuted_blocks=1 00:33:04.202 00:33:04.202 ' 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:04.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.202 --rc genhtml_branch_coverage=1 00:33:04.202 --rc genhtml_function_coverage=1 00:33:04.202 --rc genhtml_legend=1 00:33:04.202 --rc geninfo_all_blocks=1 00:33:04.202 --rc geninfo_unexecuted_blocks=1 00:33:04.202 00:33:04.202 ' 00:33:04.202 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:04.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.202 --rc genhtml_branch_coverage=1 00:33:04.202 --rc genhtml_function_coverage=1 00:33:04.202 --rc genhtml_legend=1 00:33:04.202 --rc geninfo_all_blocks=1 00:33:04.202 --rc geninfo_unexecuted_blocks=1 00:33:04.202 00:33:04.202 ' 00:33:04.202 19:31:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.202 19:31:14 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:04.202 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.203 19:31:14 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.203 19:31:14 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.203 19:31:14 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.203 19:31:14 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:04.203 19:31:14 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:04.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:04.203 19:31:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.203 19:31:14 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.203 19:31:14 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.203 19:31:14 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.203 19:31:14 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:04.203 19:31:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.203 19:31:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.203 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:04.203 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:04.203 19:31:14 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:04.203 19:31:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:06.732 
19:31:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:06.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:06.732 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:06.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.732 19:31:16 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:06.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:06.732 
19:31:16 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:06.732 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.733 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.733 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.733 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.733 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:06.733 19:31:16 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:06.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:06.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:33:06.733 00:33:06.733 --- 10.0.0.2 ping statistics --- 00:33:06.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.733 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:06.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:33:06.733 00:33:06.733 --- 10.0.0.1 ping statistics --- 00:33:06.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.733 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:06.733 19:31:17 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:06.733 19:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:06.733 19:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:06.733 
19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:33:06.733 19:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:33:06.733 19:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:33:06.733 19:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:33:06.733 19:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:06.733 19:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:06.733 19:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:10.917 19:31:21 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:33:10.917 19:31:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:10.917 19:31:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:10.917 19:31:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:15.105 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:15.105 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.105 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.105 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1290083 00:33:15.105 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:15.105 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:15.105 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1290083 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1290083 ']' 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.105 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.105 [2024-12-06 19:31:25.610156] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:33:15.105 [2024-12-06 19:31:25.610233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:15.362 [2024-12-06 19:31:25.681926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:15.362 [2024-12-06 19:31:25.740840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:15.362 [2024-12-06 19:31:25.740897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:15.363 [2024-12-06 19:31:25.740912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:15.363 [2024-12-06 19:31:25.740923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:15.363 [2024-12-06 19:31:25.740933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:15.363 [2024-12-06 19:31:25.742397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.363 [2024-12-06 19:31:25.742461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:15.363 [2024-12-06 19:31:25.742525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:15.363 [2024-12-06 19:31:25.742528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.363 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.363 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:15.363 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:15.363 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.363 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.363 INFO: Log level set to 20 00:33:15.363 INFO: Requests: 00:33:15.363 { 00:33:15.363 "jsonrpc": "2.0", 00:33:15.363 "method": "nvmf_set_config", 00:33:15.363 "id": 1, 00:33:15.363 "params": { 00:33:15.363 "admin_cmd_passthru": { 00:33:15.363 "identify_ctrlr": true 00:33:15.363 } 00:33:15.363 } 00:33:15.363 } 00:33:15.363 00:33:15.363 INFO: response: 00:33:15.363 { 00:33:15.363 "jsonrpc": "2.0", 00:33:15.363 "id": 1, 00:33:15.363 "result": true 00:33:15.363 } 00:33:15.363 00:33:15.363 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.363 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:15.363 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.363 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.363 INFO: Setting log level to 20 00:33:15.363 INFO: Setting log level to 20 00:33:15.363 INFO: Log level set to 20 00:33:15.363 INFO: Log level set to 20 00:33:15.363 
INFO: Requests: 00:33:15.363 { 00:33:15.363 "jsonrpc": "2.0", 00:33:15.363 "method": "framework_start_init", 00:33:15.363 "id": 1 00:33:15.363 } 00:33:15.363 00:33:15.363 INFO: Requests: 00:33:15.363 { 00:33:15.363 "jsonrpc": "2.0", 00:33:15.363 "method": "framework_start_init", 00:33:15.363 "id": 1 00:33:15.363 } 00:33:15.363 00:33:15.620 [2024-12-06 19:31:25.953828] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:15.620 INFO: response: 00:33:15.620 { 00:33:15.620 "jsonrpc": "2.0", 00:33:15.620 "id": 1, 00:33:15.620 "result": true 00:33:15.620 } 00:33:15.620 00:33:15.620 INFO: response: 00:33:15.620 { 00:33:15.620 "jsonrpc": "2.0", 00:33:15.620 "id": 1, 00:33:15.620 "result": true 00:33:15.620 } 00:33:15.620 00:33:15.620 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.620 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:15.620 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.620 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.620 INFO: Setting log level to 40 00:33:15.620 INFO: Setting log level to 40 00:33:15.620 INFO: Setting log level to 40 00:33:15.620 [2024-12-06 19:31:25.963736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.620 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.620 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:15.620 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:15.620 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.620 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:33:15.620 19:31:25 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.620 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.901 Nvme0n1 00:33:18.901 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.901 19:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:18.901 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.901 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.901 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.901 19:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:18.901 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.902 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.902 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.902 19:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.902 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.902 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.902 [2024-12-06 19:31:28.861107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.902 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.902 19:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:18.902 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.902 19:31:28 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.902 [ 00:33:18.902 { 00:33:18.902 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:18.902 "subtype": "Discovery", 00:33:18.902 "listen_addresses": [], 00:33:18.902 "allow_any_host": true, 00:33:18.902 "hosts": [] 00:33:18.902 }, 00:33:18.902 { 00:33:18.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.902 "subtype": "NVMe", 00:33:18.902 "listen_addresses": [ 00:33:18.902 { 00:33:18.902 "trtype": "TCP", 00:33:18.902 "adrfam": "IPv4", 00:33:18.902 "traddr": "10.0.0.2", 00:33:18.902 "trsvcid": "4420" 00:33:18.902 } 00:33:18.902 ], 00:33:18.902 "allow_any_host": true, 00:33:18.902 "hosts": [], 00:33:18.902 "serial_number": "SPDK00000000000001", 00:33:18.902 "model_number": "SPDK bdev Controller", 00:33:18.902 "max_namespaces": 1, 00:33:18.902 "min_cntlid": 1, 00:33:18.902 "max_cntlid": 65519, 00:33:18.902 "namespaces": [ 00:33:18.902 { 00:33:18.902 "nsid": 1, 00:33:18.902 "bdev_name": "Nvme0n1", 00:33:18.902 "name": "Nvme0n1", 00:33:18.902 "nguid": "06584C7BE1AA4EE49B45732277848411", 00:33:18.902 "uuid": "06584c7b-e1aa-4ee4-9b45-732277848411" 00:33:18.902 } 00:33:18.902 ] 00:33:18.902 } 00:33:18.902 ] 00:33:18.902 19:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.902 19:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:18.902 19:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:18.902 19:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:18.902 19:31:29 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:18.902 rmmod nvme_tcp 00:33:18.902 rmmod nvme_fabrics 00:33:18.902 rmmod nvme_keyring 00:33:18.902 19:31:29 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1290083 ']' 00:33:18.902 19:31:29 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1290083 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1290083 ']' 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1290083 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1290083 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1290083' 00:33:18.902 killing process with pid 1290083 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1290083 00:33:18.902 19:31:29 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1290083 00:33:20.804 19:31:31 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.804 19:31:31 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.804 19:31:31 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.804 19:31:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:20.804 19:31:31 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:20.804 19:31:31 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.804 19:31:31 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.804 19:31:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.804 19:31:31 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.804 19:31:31 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.804 19:31:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:20.804 19:31:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.719 19:31:33 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.719 00:33:22.719 real 0m18.478s 00:33:22.719 user 0m27.005s 00:33:22.719 sys 0m3.260s 00:33:22.719 19:31:33 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.720 19:31:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.720 ************************************ 00:33:22.720 END TEST nvmf_identify_passthru 00:33:22.720 ************************************ 00:33:22.720 19:31:33 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:22.720 19:31:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:22.720 19:31:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.720 19:31:33 -- common/autotest_common.sh@10 -- # set +x 00:33:22.720 ************************************ 00:33:22.720 START TEST nvmf_dif 00:33:22.720 ************************************ 00:33:22.720 19:31:33 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:22.720 * Looking for test storage... 
00:33:22.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.720 19:31:33 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:22.720 19:31:33 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:33:22.720 19:31:33 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:22.720 19:31:33 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.720 19:31:33 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.979 19:31:33 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:22.979 19:31:33 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.979 19:31:33 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:22.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.979 --rc genhtml_branch_coverage=1 00:33:22.979 --rc genhtml_function_coverage=1 00:33:22.980 --rc genhtml_legend=1 00:33:22.980 --rc geninfo_all_blocks=1 00:33:22.980 --rc geninfo_unexecuted_blocks=1 00:33:22.980 00:33:22.980 ' 00:33:22.980 19:31:33 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:22.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.980 --rc genhtml_branch_coverage=1 00:33:22.980 --rc genhtml_function_coverage=1 00:33:22.980 --rc genhtml_legend=1 00:33:22.980 --rc geninfo_all_blocks=1 00:33:22.980 --rc geninfo_unexecuted_blocks=1 00:33:22.980 00:33:22.980 ' 00:33:22.980 19:31:33 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:33:22.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.980 --rc genhtml_branch_coverage=1 00:33:22.980 --rc genhtml_function_coverage=1 00:33:22.980 --rc genhtml_legend=1 00:33:22.980 --rc geninfo_all_blocks=1 00:33:22.980 --rc geninfo_unexecuted_blocks=1 00:33:22.980 00:33:22.980 ' 00:33:22.980 19:31:33 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:22.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.980 --rc genhtml_branch_coverage=1 00:33:22.980 --rc genhtml_function_coverage=1 00:33:22.980 --rc genhtml_legend=1 00:33:22.980 --rc geninfo_all_blocks=1 00:33:22.980 --rc geninfo_unexecuted_blocks=1 00:33:22.980 00:33:22.980 ' 00:33:22.980 19:31:33 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:22.980 19:31:33 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.980 19:31:33 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.980 19:31:33 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.980 19:31:33 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.980 19:31:33 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.980 19:31:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.980 19:31:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.980 19:31:33 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.980 19:31:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:22.980 19:31:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:22.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.980 19:31:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:22.980 19:31:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:22.980 19:31:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:22.980 19:31:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:22.980 19:31:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.980 19:31:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:22.980 19:31:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.980 19:31:33 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.980 19:31:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.879 19:31:35 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.879 19:31:35 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:24.879 19:31:35 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:25.152 19:31:35 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:25.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:25.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.152 19:31:35 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:25.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:25.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.152 
19:31:35 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:25.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:33:25.152 00:33:25.152 --- 10.0.0.2 ping statistics --- 00:33:25.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.152 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:25.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:33:25.152 00:33:25.152 --- 10.0.0.1 ping statistics --- 00:33:25.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.152 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:25.152 19:31:35 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:26.086 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:26.086 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:26.086 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:26.086 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:26.086 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:26.345 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:26.345 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:26.345 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:26.345 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:26.345 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:26.345 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:26.345 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:26.345 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:33:26.345 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:26.345 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:26.345 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:26.345 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.345 19:31:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:26.345 19:31:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:26.345 19:31:36 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.345 19:31:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1293377 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:26.345 19:31:36 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1293377 00:33:26.345 19:31:36 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1293377 ']' 00:33:26.345 19:31:36 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.345 19:31:36 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.345 19:31:36 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:26.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.345 19:31:36 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.345 19:31:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.604 [2024-12-06 19:31:36.956260] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:33:26.604 [2024-12-06 19:31:36.956343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.604 [2024-12-06 19:31:37.023766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.604 [2024-12-06 19:31:37.079636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.604 [2024-12-06 19:31:37.079707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.604 [2024-12-06 19:31:37.079737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.604 [2024-12-06 19:31:37.079749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.604 [2024-12-06 19:31:37.079760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:26.604 [2024-12-06 19:31:37.080326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:26.863 19:31:37 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.863 19:31:37 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.863 19:31:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:26.863 19:31:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.863 [2024-12-06 19:31:37.259839] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.863 19:31:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.863 19:31:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.863 ************************************ 00:33:26.863 START TEST fio_dif_1_default 00:33:26.863 ************************************ 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.863 bdev_null0 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.863 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.864 [2024-12-06 19:31:37.316143] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:26.864 { 00:33:26.864 "params": { 00:33:26.864 "name": "Nvme$subsystem", 00:33:26.864 "trtype": "$TEST_TRANSPORT", 00:33:26.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:26.864 "adrfam": "ipv4", 00:33:26.864 "trsvcid": "$NVMF_PORT", 00:33:26.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:26.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:26.864 "hdgst": ${hdgst:-false}, 00:33:26.864 "ddgst": ${ddgst:-false} 00:33:26.864 }, 00:33:26.864 "method": "bdev_nvme_attach_controller" 00:33:26.864 } 00:33:26.864 EOF 00:33:26.864 )") 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:26.864 "params": { 00:33:26.864 "name": "Nvme0", 00:33:26.864 "trtype": "tcp", 00:33:26.864 "traddr": "10.0.0.2", 00:33:26.864 "adrfam": "ipv4", 00:33:26.864 "trsvcid": "4420", 00:33:26.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.864 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:26.864 "hdgst": false, 00:33:26.864 "ddgst": false 00:33:26.864 }, 00:33:26.864 "method": "bdev_nvme_attach_controller" 00:33:26.864 }' 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:26.864 19:31:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:27.122 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:27.122 fio-3.35 
00:33:27.122 Starting 1 thread 00:33:39.342 00:33:39.342 filename0: (groupid=0, jobs=1): err= 0: pid=1293606: Fri Dec 6 19:31:48 2024 00:33:39.342 read: IOPS=198, BW=793KiB/s (812kB/s)(7936KiB/10013msec) 00:33:39.342 slat (nsec): min=6724, max=73972, avg=8559.48, stdev=3293.13 00:33:39.342 clat (usec): min=526, max=45485, avg=20160.02, stdev=20292.75 00:33:39.342 lat (usec): min=533, max=45520, avg=20168.58, stdev=20292.77 00:33:39.342 clat percentiles (usec): 00:33:39.342 | 1.00th=[ 586], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[ 668], 00:33:39.342 | 30.00th=[ 676], 40.00th=[ 709], 50.00th=[ 742], 60.00th=[41157], 00:33:39.342 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:39.342 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:33:39.342 | 99.99th=[45351] 00:33:39.342 bw ( KiB/s): min= 672, max= 960, per=99.93%, avg=792.05, stdev=66.31, samples=20 00:33:39.342 iops : min= 168, max= 240, avg=198.00, stdev=16.59, samples=20 00:33:39.342 lat (usec) : 750=50.76%, 1000=1.26% 00:33:39.342 lat (msec) : 50=47.98% 00:33:39.342 cpu : usr=91.01%, sys=8.72%, ctx=19, majf=0, minf=296 00:33:39.342 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.342 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.342 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:39.342 00:33:39.342 Run status group 0 (all jobs): 00:33:39.342 READ: bw=793KiB/s (812kB/s), 793KiB/s-793KiB/s (812kB/s-812kB/s), io=7936KiB (8126kB), run=10013-10013msec 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.342 00:33:39.342 real 0m11.067s 00:33:39.342 user 0m10.211s 00:33:39.342 sys 0m1.134s 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:39.342 ************************************ 00:33:39.342 END TEST fio_dif_1_default 00:33:39.342 ************************************ 00:33:39.342 19:31:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:39.342 19:31:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:39.342 19:31:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:39.342 ************************************ 00:33:39.342 START TEST fio_dif_1_multi_subsystems 00:33:39.342 ************************************ 00:33:39.342 19:31:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.342 bdev_null0 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.342 19:31:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.342 [2024-12-06 19:31:48.421018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.342 bdev_null1 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.342 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:33:39.343 { 00:33:39.343 "params": { 00:33:39.343 "name": "Nvme$subsystem", 00:33:39.343 "trtype": "$TEST_TRANSPORT", 00:33:39.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.343 "adrfam": "ipv4", 00:33:39.343 "trsvcid": "$NVMF_PORT", 00:33:39.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.343 "hdgst": ${hdgst:-false}, 00:33:39.343 "ddgst": ${ddgst:-false} 00:33:39.343 }, 00:33:39.343 "method": "bdev_nvme_attach_controller" 00:33:39.343 } 00:33:39.343 EOF 00:33:39.343 )") 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.343 { 00:33:39.343 "params": { 00:33:39.343 "name": "Nvme$subsystem", 00:33:39.343 "trtype": "$TEST_TRANSPORT", 00:33:39.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.343 "adrfam": "ipv4", 00:33:39.343 "trsvcid": "$NVMF_PORT", 00:33:39.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.343 "hdgst": ${hdgst:-false}, 00:33:39.343 "ddgst": ${ddgst:-false} 00:33:39.343 }, 00:33:39.343 "method": "bdev_nvme_attach_controller" 00:33:39.343 } 00:33:39.343 EOF 00:33:39.343 )") 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:39.343 "params": { 00:33:39.343 "name": "Nvme0", 00:33:39.343 "trtype": "tcp", 00:33:39.343 "traddr": "10.0.0.2", 00:33:39.343 "adrfam": "ipv4", 00:33:39.343 "trsvcid": "4420", 00:33:39.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.343 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.343 "hdgst": false, 00:33:39.343 "ddgst": false 00:33:39.343 }, 00:33:39.343 "method": "bdev_nvme_attach_controller" 00:33:39.343 },{ 00:33:39.343 "params": { 00:33:39.343 "name": "Nvme1", 00:33:39.343 "trtype": "tcp", 00:33:39.343 "traddr": "10.0.0.2", 00:33:39.343 "adrfam": "ipv4", 00:33:39.343 "trsvcid": "4420", 00:33:39.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:39.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:39.343 "hdgst": false, 00:33:39.343 "ddgst": false 00:33:39.343 }, 00:33:39.343 "method": "bdev_nvme_attach_controller" 00:33:39.343 }' 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:39.343 19:31:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.343 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:39.343 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:39.343 fio-3.35 00:33:39.343 Starting 2 threads 00:33:49.311 00:33:49.311 filename0: (groupid=0, jobs=1): err= 0: pid=1295006: Fri Dec 6 19:31:59 2024 00:33:49.311 read: IOPS=201, BW=805KiB/s (824kB/s)(8080KiB/10040msec) 00:33:49.311 slat (nsec): min=6962, max=78292, avg=9216.59, stdev=3804.60 00:33:49.311 clat (usec): min=555, max=42495, avg=19851.08, stdev=20330.42 00:33:49.311 lat (usec): min=563, max=42506, avg=19860.29, stdev=20330.01 00:33:49.311 clat percentiles (usec): 00:33:49.311 | 1.00th=[ 578], 5.00th=[ 594], 10.00th=[ 603], 20.00th=[ 619], 00:33:49.311 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 758], 60.00th=[41157], 00:33:49.311 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:49.311 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:49.311 | 99.99th=[42730] 00:33:49.311 bw ( KiB/s): min= 768, max= 1024, per=67.35%, avg=806.40, stdev=66.96, samples=20 00:33:49.311 iops : min= 192, max= 256, avg=201.60, stdev=16.74, samples=20 00:33:49.311 lat (usec) : 750=49.70%, 1000=2.77% 00:33:49.311 lat (msec) : 2=0.20%, 4=0.20%, 50=47.13% 00:33:49.311 cpu : usr=94.48%, sys=5.20%, ctx=14, majf=0, minf=237 00:33:49.311 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=2020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:49.311 filename1: (groupid=0, jobs=1): err= 0: pid=1295007: Fri Dec 6 19:31:59 2024 00:33:49.311 read: IOPS=98, BW=393KiB/s (402kB/s)(3936KiB/10027msec) 00:33:49.311 slat (nsec): min=7031, max=34291, avg=9280.86, stdev=3215.47 00:33:49.311 clat (usec): min=703, max=44082, avg=40727.99, stdev=3635.36 00:33:49.311 lat (usec): min=711, max=44111, avg=40737.27, stdev=3635.33 00:33:49.311 clat percentiles (usec): 00:33:49.311 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:49.311 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:49.311 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:49.311 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:33:49.311 | 99.99th=[44303] 00:33:49.311 bw ( KiB/s): min= 384, max= 416, per=32.67%, avg=392.00, stdev=14.22, samples=20 00:33:49.311 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:33:49.311 lat (usec) : 750=0.41%, 1000=0.41% 00:33:49.311 lat (msec) : 50=99.19% 00:33:49.311 cpu : usr=94.72%, sys=4.96%, ctx=12, majf=0, minf=116 00:33:49.311 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:49.311 00:33:49.311 Run status group 0 (all jobs): 00:33:49.311 READ: bw=1197KiB/s (1226kB/s), 393KiB/s-805KiB/s (402kB/s-824kB/s), io=11.7MiB (12.3MB), run=10027-10040msec 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.311 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 00:33:49.312 real 0m11.391s 00:33:49.312 user 0m20.381s 00:33:49.312 sys 0m1.294s 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.312 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 ************************************ 00:33:49.312 END TEST fio_dif_1_multi_subsystems 00:33:49.312 ************************************ 00:33:49.312 19:31:59 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:49.312 19:31:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:49.312 19:31:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.312 19:31:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 ************************************ 00:33:49.312 START TEST fio_dif_rand_params 00:33:49.312 ************************************ 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 bdev_null0 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 [2024-12-06 19:31:59.869523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:49.312 { 00:33:49.312 "params": { 00:33:49.312 "name": "Nvme$subsystem", 00:33:49.312 "trtype": "$TEST_TRANSPORT", 00:33:49.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.312 "adrfam": "ipv4", 00:33:49.312 "trsvcid": "$NVMF_PORT", 00:33:49.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.312 "hdgst": ${hdgst:-false}, 00:33:49.312 "ddgst": 
${ddgst:-false} 00:33:49.312 }, 00:33:49.312 "method": "bdev_nvme_attach_controller" 00:33:49.312 } 00:33:49.312 EOF 00:33:49.312 )") 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:49.312 19:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:49.312 "params": { 00:33:49.312 "name": "Nvme0", 00:33:49.312 "trtype": "tcp", 00:33:49.312 "traddr": "10.0.0.2", 00:33:49.312 "adrfam": "ipv4", 00:33:49.312 "trsvcid": "4420", 00:33:49.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:49.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:49.312 "hdgst": false, 00:33:49.312 "ddgst": false 00:33:49.312 }, 00:33:49.312 "method": "bdev_nvme_attach_controller" 00:33:49.312 }' 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:49.571 19:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.571 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:49.571 ... 00:33:49.571 fio-3.35 00:33:49.571 Starting 3 threads 00:33:56.125 00:33:56.125 filename0: (groupid=0, jobs=1): err= 0: pid=1296488: Fri Dec 6 19:32:05 2024 00:33:56.125 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(150MiB/5047msec) 00:33:56.125 slat (nsec): min=4355, max=84892, avg=13946.16, stdev=2709.94 00:33:56.125 clat (usec): min=6365, max=55123, avg=12539.98, stdev=5093.55 00:33:56.125 lat (usec): min=6378, max=55138, avg=12553.93, stdev=5093.41 00:33:56.125 clat percentiles (usec): 00:33:56.125 | 1.00th=[ 8291], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10945], 00:33:56.125 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:33:56.125 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13566], 95.00th=[14353], 00:33:56.125 | 99.00th=[48497], 99.50th=[51119], 99.90th=[54789], 99.95th=[55313], 00:33:56.125 | 99.99th=[55313] 00:33:56.125 bw ( KiB/s): min=23296, max=34048, per=35.16%, avg=30720.00, stdev=2871.06, samples=10 00:33:56.125 iops : min= 182, max= 266, avg=240.00, stdev=22.43, samples=10 00:33:56.125 lat (msec) : 10=6.66%, 20=91.68%, 50=0.92%, 100=0.75% 00:33:56.125 cpu : usr=93.62%, sys=5.89%, ctx=11, majf=0, minf=143 00:33:56.125 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.125 issued rwts: total=1202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.125 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:56.125 filename0: (groupid=0, jobs=1): err= 0: pid=1296489: Fri Dec 6 19:32:05 2024 00:33:56.125 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(142MiB/5005msec) 00:33:56.125 slat (nsec): min=4305, max=35343, 
avg=14594.67, stdev=2936.82 00:33:56.125 clat (usec): min=4924, max=53814, avg=13230.34, stdev=3614.50 00:33:56.125 lat (usec): min=4946, max=53828, avg=13244.94, stdev=3614.62 00:33:56.125 clat percentiles (usec): 00:33:56.125 | 1.00th=[ 5211], 5.00th=[ 8586], 10.00th=[10290], 20.00th=[11731], 00:33:56.125 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13173], 60.00th=[13698], 00:33:56.125 | 70.00th=[14353], 80.00th=[15008], 90.00th=[15795], 95.00th=[16581], 00:33:56.125 | 99.00th=[17695], 99.50th=[48497], 99.90th=[53740], 99.95th=[53740], 00:33:56.125 | 99.99th=[53740] 00:33:56.125 bw ( KiB/s): min=26880, max=30720, per=33.14%, avg=28953.60, stdev=1128.53, samples=10 00:33:56.125 iops : min= 210, max= 240, avg=226.20, stdev= 8.82, samples=10 00:33:56.125 lat (msec) : 10=9.44%, 20=90.03%, 50=0.26%, 100=0.26% 00:33:56.125 cpu : usr=90.71%, sys=6.89%, ctx=292, majf=0, minf=47 00:33:56.125 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.126 issued rwts: total=1133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.126 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:56.126 filename0: (groupid=0, jobs=1): err= 0: pid=1296490: Fri Dec 6 19:32:05 2024 00:33:56.126 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(139MiB/5005msec) 00:33:56.126 slat (nsec): min=4531, max=42426, avg=15245.84, stdev=3547.06 00:33:56.126 clat (usec): min=4585, max=56403, avg=13504.48, stdev=4674.16 00:33:56.126 lat (usec): min=4599, max=56419, avg=13519.72, stdev=4674.29 00:33:56.126 clat percentiles (usec): 00:33:56.126 | 1.00th=[ 6063], 5.00th=[ 8848], 10.00th=[10945], 20.00th=[11731], 00:33:56.126 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13566], 00:33:56.126 | 70.00th=[14222], 80.00th=[14877], 90.00th=[15926], 95.00th=[16581], 00:33:56.126 | 99.00th=[49021], 
99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:33:56.126 | 99.99th=[56361] 00:33:56.126 bw ( KiB/s): min=24320, max=29952, per=32.46%, avg=28364.80, stdev=1627.16, samples=10 00:33:56.126 iops : min= 190, max= 234, avg=221.60, stdev=12.71, samples=10 00:33:56.126 lat (msec) : 10=7.84%, 20=91.08%, 50=0.09%, 100=0.99% 00:33:56.126 cpu : usr=89.73%, sys=7.61%, ctx=279, majf=0, minf=82 00:33:56.126 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.126 issued rwts: total=1110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.126 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:56.126 00:33:56.126 Run status group 0 (all jobs): 00:33:56.126 READ: bw=85.3MiB/s (89.5MB/s), 27.7MiB/s-29.8MiB/s (29.1MB/s-31.2MB/s), io=431MiB (452MB), run=5005-5047msec 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:56.126 
19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 bdev_null0 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 [2024-12-06 19:32:06.162736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 bdev_null1 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 bdev_null2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:56.126 19:32:06 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.126 { 00:33:56.126 "params": { 00:33:56.126 "name": "Nvme$subsystem", 00:33:56.126 "trtype": "$TEST_TRANSPORT", 00:33:56.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.126 "adrfam": "ipv4", 00:33:56.126 "trsvcid": "$NVMF_PORT", 00:33:56.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.126 "hdgst": ${hdgst:-false}, 00:33:56.126 "ddgst": ${ddgst:-false} 00:33:56.126 }, 00:33:56.126 "method": "bdev_nvme_attach_controller" 00:33:56.126 } 00:33:56.126 EOF 00:33:56.126 )") 00:33:56.126 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.127 { 00:33:56.127 "params": { 00:33:56.127 "name": "Nvme$subsystem", 00:33:56.127 "trtype": "$TEST_TRANSPORT", 00:33:56.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.127 "adrfam": "ipv4", 00:33:56.127 "trsvcid": "$NVMF_PORT", 00:33:56.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.127 "hdgst": ${hdgst:-false}, 00:33:56.127 "ddgst": ${ddgst:-false} 00:33:56.127 }, 00:33:56.127 "method": "bdev_nvme_attach_controller" 00:33:56.127 } 00:33:56.127 EOF 00:33:56.127 )") 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file++ )) 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.127 { 00:33:56.127 "params": { 00:33:56.127 "name": "Nvme$subsystem", 00:33:56.127 "trtype": "$TEST_TRANSPORT", 00:33:56.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.127 "adrfam": "ipv4", 00:33:56.127 "trsvcid": "$NVMF_PORT", 00:33:56.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.127 "hdgst": ${hdgst:-false}, 00:33:56.127 "ddgst": ${ddgst:-false} 00:33:56.127 }, 00:33:56.127 "method": "bdev_nvme_attach_controller" 00:33:56.127 } 00:33:56.127 EOF 00:33:56.127 )") 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:56.127 "params": { 00:33:56.127 "name": "Nvme0", 00:33:56.127 "trtype": "tcp", 00:33:56.127 "traddr": "10.0.0.2", 00:33:56.127 "adrfam": "ipv4", 00:33:56.127 "trsvcid": "4420", 00:33:56.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:56.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:56.127 "hdgst": false, 00:33:56.127 "ddgst": false 00:33:56.127 }, 00:33:56.127 "method": "bdev_nvme_attach_controller" 00:33:56.127 },{ 00:33:56.127 "params": { 00:33:56.127 "name": "Nvme1", 00:33:56.127 "trtype": "tcp", 00:33:56.127 "traddr": "10.0.0.2", 00:33:56.127 "adrfam": "ipv4", 00:33:56.127 "trsvcid": "4420", 00:33:56.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.127 "hdgst": false, 00:33:56.127 "ddgst": false 00:33:56.127 }, 00:33:56.127 "method": "bdev_nvme_attach_controller" 00:33:56.127 },{ 00:33:56.127 "params": { 00:33:56.127 "name": "Nvme2", 00:33:56.127 "trtype": "tcp", 00:33:56.127 "traddr": "10.0.0.2", 00:33:56.127 "adrfam": "ipv4", 00:33:56.127 "trsvcid": "4420", 00:33:56.127 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:56.127 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:56.127 "hdgst": false, 00:33:56.127 "ddgst": false 00:33:56.127 }, 00:33:56.127 "method": "bdev_nvme_attach_controller" 00:33:56.127 }' 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.127 19:32:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:56.127 19:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.127 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:56.127 ... 00:33:56.127 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:56.127 ... 00:33:56.127 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:56.127 ... 
00:33:56.127 fio-3.35 00:33:56.127 Starting 24 threads 00:34:08.334 00:34:08.334 filename0: (groupid=0, jobs=1): err= 0: pid=1297328: Fri Dec 6 19:32:17 2024 00:34:08.334 read: IOPS=452, BW=1808KiB/s (1852kB/s)(17.7MiB/10030msec) 00:34:08.334 slat (nsec): min=7042, max=73923, avg=27368.85, stdev=12435.81 00:34:08.334 clat (usec): min=14446, max=56579, avg=35162.28, stdev=4431.58 00:34:08.334 lat (usec): min=14456, max=56597, avg=35189.65, stdev=4429.18 00:34:08.334 clat percentiles (usec): 00:34:08.334 | 1.00th=[18744], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:08.334 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.334 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.334 | 99.00th=[45351], 99.50th=[45876], 99.90th=[47449], 99.95th=[47449], 00:34:08.334 | 99.99th=[56361] 00:34:08.334 bw ( KiB/s): min= 1408, max= 2096, per=4.19%, avg=1807.20, stdev=179.49, samples=20 00:34:08.334 iops : min= 352, max= 524, avg=451.80, stdev=44.87, samples=20 00:34:08.334 lat (msec) : 20=1.08%, 50=98.88%, 100=0.04% 00:34:08.334 cpu : usr=97.33%, sys=1.75%, ctx=122, majf=0, minf=57 00:34:08.334 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:08.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.334 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.334 issued rwts: total=4534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.334 filename0: (groupid=0, jobs=1): err= 0: pid=1297329: Fri Dec 6 19:32:17 2024 00:34:08.334 read: IOPS=450, BW=1802KiB/s (1846kB/s)(17.6MiB/10013msec) 00:34:08.334 slat (nsec): min=6166, max=74840, avg=34640.79, stdev=9687.36 00:34:08.334 clat (usec): min=12890, max=66642, avg=35222.61, stdev=4518.28 00:34:08.334 lat (usec): min=12947, max=66675, avg=35257.25, stdev=4518.88 00:34:08.334 clat percentiles (usec): 00:34:08.334 | 
1.00th=[23200], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:08.334 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.334 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.334 | 99.00th=[45351], 99.50th=[47449], 99.90th=[66323], 99.95th=[66323], 00:34:08.334 | 99.99th=[66847] 00:34:08.334 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1798.40, stdev=186.33, samples=20 00:34:08.334 iops : min= 352, max= 480, avg=449.60, stdev=46.58, samples=20 00:34:08.334 lat (msec) : 20=0.62%, 50=98.94%, 100=0.44% 00:34:08.334 cpu : usr=98.28%, sys=1.27%, ctx=18, majf=0, minf=86 00:34:08.334 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:34:08.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.334 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.334 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.334 filename0: (groupid=0, jobs=1): err= 0: pid=1297330: Fri Dec 6 19:32:17 2024 00:34:08.334 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10009msec) 00:34:08.334 slat (nsec): min=11862, max=96307, avg=42926.59, stdev=13492.90 00:34:08.334 clat (usec): min=28250, max=67941, avg=35342.58, stdev=4317.78 00:34:08.334 lat (usec): min=28267, max=67983, avg=35385.51, stdev=4317.17 00:34:08.334 clat percentiles (usec): 00:34:08.334 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:08.334 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:08.334 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.334 | 99.00th=[45351], 99.50th=[45351], 99.90th=[67634], 99.95th=[67634], 00:34:08.334 | 99.99th=[67634] 00:34:08.334 bw ( KiB/s): min= 1402, max= 1920, per=4.14%, avg=1785.45, stdev=174.18, samples=20 00:34:08.334 iops : min= 350, max= 480, avg=446.30, stdev=43.63, 
samples=20 00:34:08.334 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.334 cpu : usr=97.50%, sys=1.59%, ctx=126, majf=0, minf=39 00:34:08.334 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.334 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.334 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.334 filename0: (groupid=0, jobs=1): err= 0: pid=1297331: Fri Dec 6 19:32:17 2024 00:34:08.334 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10009msec) 00:34:08.334 slat (nsec): min=15066, max=99187, avg=44306.98, stdev=13868.14 00:34:08.334 clat (usec): min=28238, max=71197, avg=35351.00, stdev=4334.53 00:34:08.334 lat (usec): min=28271, max=71242, avg=35395.31, stdev=4333.97 00:34:08.334 clat percentiles (usec): 00:34:08.334 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:08.334 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:08.334 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.334 | 99.00th=[45351], 99.50th=[45351], 99.90th=[68682], 99.95th=[68682], 00:34:08.334 | 99.99th=[70779] 00:34:08.334 bw ( KiB/s): min= 1402, max= 1920, per=4.14%, avg=1785.45, stdev=174.18, samples=20 00:34:08.334 iops : min= 350, max= 480, avg=446.30, stdev=43.63, samples=20 00:34:08.334 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.334 cpu : usr=97.98%, sys=1.35%, ctx=126, majf=0, minf=40 00:34:08.334 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.335 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:34:08.335 filename0: (groupid=0, jobs=1): err= 0: pid=1297332: Fri Dec 6 19:32:17 2024 00:34:08.335 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.5MiB/10003msec) 00:34:08.335 slat (usec): min=4, max=119, avg=43.37, stdev=27.94 00:34:08.335 clat (usec): min=32218, max=57369, avg=35341.47, stdev=3986.60 00:34:08.335 lat (usec): min=32295, max=57381, avg=35384.84, stdev=3987.93 00:34:08.335 clat percentiles (usec): 00:34:08.335 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:34:08.335 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.335 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[44303], 00:34:08.335 | 99.00th=[45351], 99.50th=[45351], 99.90th=[54789], 99.95th=[54789], 00:34:08.335 | 99.99th=[57410] 00:34:08.335 bw ( KiB/s): min= 1408, max= 1920, per=4.19%, avg=1805.63, stdev=164.52, samples=19 00:34:08.335 iops : min= 352, max= 480, avg=451.37, stdev=41.17, samples=19 00:34:08.335 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.335 cpu : usr=97.93%, sys=1.40%, ctx=95, majf=0, minf=43 00:34:08.335 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.335 filename0: (groupid=0, jobs=1): err= 0: pid=1297333: Fri Dec 6 19:32:17 2024 00:34:08.335 read: IOPS=450, BW=1802KiB/s (1846kB/s)(17.6MiB/10014msec) 00:34:08.335 slat (nsec): min=6103, max=95060, avg=37886.27, stdev=12343.28 00:34:08.335 clat (usec): min=16181, max=65279, avg=35167.34, stdev=4191.11 00:34:08.335 lat (usec): min=16201, max=65325, avg=35205.23, stdev=4190.96 00:34:08.335 clat percentiles (usec): 00:34:08.335 | 1.00th=[24511], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:08.335 | 
30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.335 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.335 | 99.00th=[45351], 99.50th=[45876], 99.90th=[47449], 99.95th=[50594], 00:34:08.335 | 99.99th=[65274] 00:34:08.335 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1798.40, stdev=183.27, samples=20 00:34:08.335 iops : min= 352, max= 480, avg=449.60, stdev=45.82, samples=20 00:34:08.335 lat (msec) : 20=0.40%, 50=99.51%, 100=0.09% 00:34:08.335 cpu : usr=97.84%, sys=1.58%, ctx=52, majf=0, minf=44 00:34:08.335 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.335 filename0: (groupid=0, jobs=1): err= 0: pid=1297334: Fri Dec 6 19:32:17 2024 00:34:08.335 read: IOPS=448, BW=1794KiB/s (1837kB/s)(17.6MiB/10023msec) 00:34:08.335 slat (usec): min=14, max=119, avg=43.37, stdev=14.22 00:34:08.335 clat (usec): min=28318, max=47176, avg=35294.45, stdev=3923.28 00:34:08.335 lat (usec): min=28363, max=47215, avg=35337.82, stdev=3921.97 00:34:08.335 clat percentiles (usec): 00:34:08.335 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:08.335 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.335 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.335 | 99.00th=[45351], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:34:08.335 | 99.99th=[46924] 00:34:08.335 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1789.75, stdev=170.02, samples=20 00:34:08.335 iops : min= 352, max= 480, avg=447.40, stdev=42.58, samples=20 00:34:08.335 lat (msec) : 50=100.00% 00:34:08.335 cpu : usr=97.64%, sys=1.59%, 
ctx=117, majf=0, minf=53 00:34:08.335 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.335 filename0: (groupid=0, jobs=1): err= 0: pid=1297335: Fri Dec 6 19:32:17 2024 00:34:08.335 read: IOPS=448, BW=1793KiB/s (1836kB/s)(17.6MiB/10028msec) 00:34:08.335 slat (usec): min=5, max=128, avg=43.73, stdev=18.72 00:34:08.335 clat (usec): min=28361, max=51597, avg=35310.25, stdev=3977.94 00:34:08.335 lat (usec): min=28393, max=51612, avg=35353.98, stdev=3975.92 00:34:08.335 clat percentiles (usec): 00:34:08.335 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:34:08.335 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.335 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.335 | 99.00th=[44827], 99.50th=[45351], 99.90th=[51643], 99.95th=[51643], 00:34:08.335 | 99.99th=[51643] 00:34:08.335 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1789.00, stdev=166.33, samples=20 00:34:08.335 iops : min= 352, max= 480, avg=447.25, stdev=41.58, samples=20 00:34:08.335 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.335 cpu : usr=96.94%, sys=1.99%, ctx=194, majf=0, minf=44 00:34:08.335 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.335 filename1: (groupid=0, jobs=1): err= 0: pid=1297336: Fri Dec 6 19:32:17 2024 00:34:08.335 
read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10010msec) 00:34:08.335 slat (usec): min=10, max=104, avg=45.56, stdev=15.97 00:34:08.335 clat (usec): min=28272, max=69553, avg=35321.97, stdev=4375.34 00:34:08.335 lat (usec): min=28305, max=69588, avg=35367.53, stdev=4373.83 00:34:08.335 clat percentiles (usec): 00:34:08.335 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:34:08.335 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:08.335 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.335 | 99.00th=[45351], 99.50th=[45351], 99.90th=[69731], 99.95th=[69731], 00:34:08.335 | 99.99th=[69731] 00:34:08.335 bw ( KiB/s): min= 1402, max= 1920, per=4.14%, avg=1785.30, stdev=174.29, samples=20 00:34:08.335 iops : min= 350, max= 480, avg=446.30, stdev=43.63, samples=20 00:34:08.335 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.335 cpu : usr=98.38%, sys=1.22%, ctx=22, majf=0, minf=46 00:34:08.335 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.335 filename1: (groupid=0, jobs=1): err= 0: pid=1297337: Fri Dec 6 19:32:17 2024 00:34:08.335 read: IOPS=450, BW=1802KiB/s (1846kB/s)(17.6MiB/10013msec) 00:34:08.335 slat (nsec): min=8680, max=66009, avg=34179.19, stdev=8224.48 00:34:08.335 clat (usec): min=12828, max=47396, avg=35207.21, stdev=4142.83 00:34:08.335 lat (usec): min=12882, max=47443, avg=35241.39, stdev=4143.12 00:34:08.335 clat percentiles (usec): 00:34:08.335 | 1.00th=[24249], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:08.335 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.335 | 70.00th=[33817], 
80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.335 | 99.00th=[45351], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:34:08.335 | 99.99th=[47449] 00:34:08.335 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1798.40, stdev=183.27, samples=20 00:34:08.335 iops : min= 352, max= 480, avg=449.60, stdev=45.82, samples=20 00:34:08.335 lat (msec) : 20=0.40%, 50=99.60% 00:34:08.335 cpu : usr=98.24%, sys=1.34%, ctx=22, majf=0, minf=53 00:34:08.335 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.335 filename1: (groupid=0, jobs=1): err= 0: pid=1297339: Fri Dec 6 19:32:17 2024 00:34:08.335 read: IOPS=450, BW=1802KiB/s (1845kB/s)(17.6MiB/10015msec) 00:34:08.335 slat (usec): min=5, max=110, avg=56.90, stdev=19.78 00:34:08.335 clat (usec): min=15223, max=60848, avg=35006.37, stdev=4230.05 00:34:08.335 lat (usec): min=15273, max=60905, avg=35063.27, stdev=4229.91 00:34:08.335 clat percentiles (usec): 00:34:08.335 | 1.00th=[25035], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:34:08.335 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:08.335 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[44303], 00:34:08.335 | 99.00th=[45351], 99.50th=[45876], 99.90th=[47449], 99.95th=[55837], 00:34:08.335 | 99.99th=[61080] 00:34:08.335 bw ( KiB/s): min= 1408, max= 1936, per=4.17%, avg=1798.40, stdev=183.34, samples=20 00:34:08.335 iops : min= 352, max= 484, avg=449.60, stdev=45.84, samples=20 00:34:08.335 lat (msec) : 20=0.44%, 50=99.47%, 100=0.09% 00:34:08.335 cpu : usr=98.31%, sys=1.24%, ctx=10, majf=0, minf=39 00:34:08.335 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 
32=0.0%, >=64=0.0% 00:34:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.335 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.335 filename1: (groupid=0, jobs=1): err= 0: pid=1297340: Fri Dec 6 19:32:17 2024 00:34:08.335 read: IOPS=448, BW=1794KiB/s (1837kB/s)(17.6MiB/10027msec) 00:34:08.335 slat (usec): min=8, max=135, avg=41.27, stdev=28.27 00:34:08.335 clat (usec): min=28729, max=50822, avg=35330.08, stdev=4031.81 00:34:08.335 lat (usec): min=28755, max=50852, avg=35371.34, stdev=4021.93 00:34:08.335 clat percentiles (usec): 00:34:08.335 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:34:08.335 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.335 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.336 | 99.00th=[45351], 99.50th=[45351], 99.90th=[50594], 99.95th=[50594], 00:34:08.336 | 99.99th=[50594] 00:34:08.336 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1789.15, stdev=166.21, samples=20 00:34:08.336 iops : min= 352, max= 480, avg=447.25, stdev=41.58, samples=20 00:34:08.336 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.336 cpu : usr=97.84%, sys=1.38%, ctx=115, majf=0, minf=57 00:34:08.336 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.336 filename1: (groupid=0, jobs=1): err= 0: pid=1297341: Fri Dec 6 19:32:17 2024 00:34:08.336 read: IOPS=448, BW=1793KiB/s (1836kB/s)(17.6MiB/10028msec) 00:34:08.336 slat (usec): min=9, 
max=131, avg=43.79, stdev=15.05 00:34:08.336 clat (usec): min=28361, max=51541, avg=35289.88, stdev=3958.41 00:34:08.336 lat (usec): min=28423, max=51565, avg=35333.66, stdev=3957.65 00:34:08.336 clat percentiles (usec): 00:34:08.336 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:08.336 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:08.336 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.336 | 99.00th=[44827], 99.50th=[45351], 99.90th=[51643], 99.95th=[51643], 00:34:08.336 | 99.99th=[51643] 00:34:08.336 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1789.00, stdev=166.33, samples=20 00:34:08.336 iops : min= 352, max= 480, avg=447.25, stdev=41.58, samples=20 00:34:08.336 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.336 cpu : usr=98.58%, sys=1.00%, ctx=15, majf=0, minf=48 00:34:08.336 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.336 filename1: (groupid=0, jobs=1): err= 0: pid=1297342: Fri Dec 6 19:32:17 2024 00:34:08.336 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10009msec) 00:34:08.336 slat (nsec): min=12427, max=99800, avg=42495.09, stdev=12384.49 00:34:08.336 clat (usec): min=28306, max=67552, avg=35350.03, stdev=4307.27 00:34:08.336 lat (usec): min=28319, max=67586, avg=35392.52, stdev=4306.48 00:34:08.336 clat percentiles (usec): 00:34:08.336 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:08.336 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:08.336 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.336 | 99.00th=[45351], 99.50th=[45351], 
99.90th=[67634], 99.95th=[67634], 00:34:08.336 | 99.99th=[67634] 00:34:08.336 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1785.60, stdev=173.60, samples=20 00:34:08.336 iops : min= 352, max= 480, avg=446.40, stdev=43.40, samples=20 00:34:08.336 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.336 cpu : usr=98.48%, sys=1.10%, ctx=57, majf=0, minf=50 00:34:08.336 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.336 filename1: (groupid=0, jobs=1): err= 0: pid=1297343: Fri Dec 6 19:32:17 2024 00:34:08.336 read: IOPS=449, BW=1799KiB/s (1843kB/s)(17.6MiB/10030msec) 00:34:08.336 slat (nsec): min=8076, max=71125, avg=34078.47, stdev=10198.64 00:34:08.336 clat (usec): min=14404, max=54168, avg=35279.62, stdev=4273.90 00:34:08.336 lat (usec): min=14449, max=54216, avg=35313.70, stdev=4273.93 00:34:08.336 clat percentiles (usec): 00:34:08.336 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:08.336 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.336 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.336 | 99.00th=[45351], 99.50th=[46924], 99.90th=[53216], 99.95th=[53740], 00:34:08.336 | 99.99th=[54264] 00:34:08.336 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1798.40, stdev=168.56, samples=20 00:34:08.336 iops : min= 352, max= 480, avg=449.60, stdev=42.14, samples=20 00:34:08.336 lat (msec) : 20=0.58%, 50=99.20%, 100=0.22% 00:34:08.336 cpu : usr=98.32%, sys=1.27%, ctx=18, majf=0, minf=32 00:34:08.336 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:08.336 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.336 filename1: (groupid=0, jobs=1): err= 0: pid=1297344: Fri Dec 6 19:32:17 2024 00:34:08.336 read: IOPS=467, BW=1872KiB/s (1917kB/s)(18.3MiB/10010msec) 00:34:08.336 slat (usec): min=8, max=125, avg=37.92, stdev=20.12 00:34:08.336 clat (usec): min=12209, max=82865, avg=33867.68, stdev=5079.58 00:34:08.336 lat (usec): min=12232, max=82897, avg=33905.60, stdev=5084.07 00:34:08.336 clat percentiles (usec): 00:34:08.336 | 1.00th=[19006], 5.00th=[25035], 10.00th=[31065], 20.00th=[32900], 00:34:08.336 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:34:08.336 | 70.00th=[33817], 80.00th=[33817], 90.00th=[42206], 95.00th=[43779], 00:34:08.336 | 99.00th=[44303], 99.50th=[54264], 99.90th=[68682], 99.95th=[68682], 00:34:08.336 | 99.99th=[83362] 00:34:08.336 bw ( KiB/s): min= 1402, max= 2320, per=4.33%, avg=1866.90, stdev=195.95, samples=20 00:34:08.336 iops : min= 350, max= 580, avg=466.70, stdev=49.05, samples=20 00:34:08.336 lat (msec) : 20=1.67%, 50=97.65%, 100=0.68% 00:34:08.336 cpu : usr=98.04%, sys=1.36%, ctx=69, majf=0, minf=27 00:34:08.336 IO depths : 1=3.3%, 2=8.0%, 4=20.0%, 8=59.0%, 16=9.7%, 32=0.0%, >=64=0.0% 00:34:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 issued rwts: total=4684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.336 filename2: (groupid=0, jobs=1): err= 0: pid=1297345: Fri Dec 6 19:32:17 2024 00:34:08.336 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10010msec) 00:34:08.336 slat (usec): min=9, max=101, avg=43.24, stdev=13.48 00:34:08.336 clat (usec): min=28249, 
max=72130, avg=35371.42, stdev=4358.68 00:34:08.336 lat (usec): min=28288, max=72168, avg=35414.66, stdev=4357.96 00:34:08.336 clat percentiles (usec): 00:34:08.336 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:08.336 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:08.336 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.336 | 99.00th=[45351], 99.50th=[45351], 99.90th=[68682], 99.95th=[69731], 00:34:08.336 | 99.99th=[71828] 00:34:08.336 bw ( KiB/s): min= 1402, max= 1920, per=4.14%, avg=1785.30, stdev=173.75, samples=20 00:34:08.336 iops : min= 350, max= 480, avg=446.30, stdev=43.50, samples=20 00:34:08.336 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.336 cpu : usr=98.11%, sys=1.41%, ctx=54, majf=0, minf=32 00:34:08.336 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:34:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.336 filename2: (groupid=0, jobs=1): err= 0: pid=1297346: Fri Dec 6 19:32:17 2024 00:34:08.336 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10009msec) 00:34:08.336 slat (usec): min=8, max=100, avg=36.34, stdev=14.35 00:34:08.336 clat (usec): min=23634, max=67594, avg=35414.93, stdev=4378.11 00:34:08.336 lat (usec): min=23650, max=67634, avg=35451.27, stdev=4377.63 00:34:08.336 clat percentiles (usec): 00:34:08.336 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:08.336 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.336 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.336 | 99.00th=[45351], 99.50th=[47449], 99.90th=[67634], 99.95th=[67634], 00:34:08.336 | 99.99th=[67634] 00:34:08.336 bw ( 
KiB/s): min= 1408, max= 1920, per=4.14%, avg=1785.60, stdev=171.26, samples=20 00:34:08.336 iops : min= 352, max= 480, avg=446.40, stdev=42.81, samples=20 00:34:08.336 lat (msec) : 50=99.60%, 100=0.40% 00:34:08.336 cpu : usr=97.64%, sys=1.64%, ctx=69, majf=0, minf=49 00:34:08.336 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:34:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.336 filename2: (groupid=0, jobs=1): err= 0: pid=1297348: Fri Dec 6 19:32:17 2024 00:34:08.336 read: IOPS=447, BW=1792KiB/s (1835kB/s)(17.5MiB/10002msec) 00:34:08.336 slat (nsec): min=5229, max=91788, avg=39300.30, stdev=13615.75 00:34:08.336 clat (usec): min=32478, max=54890, avg=35402.92, stdev=4014.84 00:34:08.336 lat (usec): min=32502, max=54904, avg=35442.22, stdev=4013.01 00:34:08.336 clat percentiles (usec): 00:34:08.336 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:08.336 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.336 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.336 | 99.00th=[45351], 99.50th=[45351], 99.90th=[54789], 99.95th=[54789], 00:34:08.336 | 99.99th=[54789] 00:34:08.336 bw ( KiB/s): min= 1408, max= 1920, per=4.19%, avg=1805.63, stdev=164.52, samples=19 00:34:08.336 iops : min= 352, max= 480, avg=451.37, stdev=41.17, samples=19 00:34:08.336 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.336 cpu : usr=98.47%, sys=1.12%, ctx=16, majf=0, minf=33 00:34:08.336 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.336 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:08.336 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.336 filename2: (groupid=0, jobs=1): err= 0: pid=1297349: Fri Dec 6 19:32:17 2024 00:34:08.337 read: IOPS=450, BW=1802KiB/s (1846kB/s)(17.6MiB/10013msec) 00:34:08.337 slat (nsec): min=8018, max=84994, avg=34881.86, stdev=8773.53 00:34:08.337 clat (usec): min=15526, max=47358, avg=35191.42, stdev=4118.11 00:34:08.337 lat (usec): min=15581, max=47417, avg=35226.30, stdev=4118.79 00:34:08.337 clat percentiles (usec): 00:34:08.337 | 1.00th=[24249], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:34:08.337 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:08.337 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.337 | 99.00th=[45351], 99.50th=[45351], 99.90th=[46924], 99.95th=[47449], 00:34:08.337 | 99.99th=[47449] 00:34:08.337 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1798.40, stdev=183.27, samples=20 00:34:08.337 iops : min= 352, max= 480, avg=449.60, stdev=45.82, samples=20 00:34:08.337 lat (msec) : 20=0.35%, 50=99.65% 00:34:08.337 cpu : usr=98.09%, sys=1.38%, ctx=81, majf=0, minf=43 00:34:08.337 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.337 filename2: (groupid=0, jobs=1): err= 0: pid=1297350: Fri Dec 6 19:32:17 2024 00:34:08.337 read: IOPS=449, BW=1799KiB/s (1843kB/s)(17.6MiB/10030msec) 00:34:08.337 slat (nsec): min=7608, max=84284, avg=31436.21, stdev=13701.88 00:34:08.337 clat (usec): min=14602, max=51454, avg=35309.88, stdev=4155.16 00:34:08.337 lat (usec): min=14641, max=51492, 
avg=35341.32, stdev=4155.08 00:34:08.337 clat percentiles (usec): 00:34:08.337 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:08.337 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.337 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.337 | 99.00th=[45351], 99.50th=[45876], 99.90th=[50594], 99.95th=[51119], 00:34:08.337 | 99.99th=[51643] 00:34:08.337 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1798.40, stdev=168.56, samples=20 00:34:08.337 iops : min= 352, max= 480, avg=449.60, stdev=42.14, samples=20 00:34:08.337 lat (msec) : 20=0.49%, 50=99.38%, 100=0.13% 00:34:08.337 cpu : usr=98.40%, sys=1.19%, ctx=17, majf=0, minf=48 00:34:08.337 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.337 filename2: (groupid=0, jobs=1): err= 0: pid=1297351: Fri Dec 6 19:32:17 2024 00:34:08.337 read: IOPS=451, BW=1805KiB/s (1848kB/s)(17.7MiB/10034msec) 00:34:08.337 slat (usec): min=6, max=116, avg=27.32, stdev=24.28 00:34:08.337 clat (usec): min=13654, max=45604, avg=35215.15, stdev=4165.80 00:34:08.337 lat (usec): min=13668, max=45622, avg=35242.47, stdev=4160.96 00:34:08.337 clat percentiles (usec): 00:34:08.337 | 1.00th=[23987], 5.00th=[32637], 10.00th=[33162], 20.00th=[33424], 00:34:08.337 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.337 | 70.00th=[34341], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.337 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:34:08.337 | 99.99th=[45351] 00:34:08.337 bw ( KiB/s): min= 1408, max= 1920, per=4.19%, avg=1804.80, stdev=160.30, 
samples=20 00:34:08.337 iops : min= 352, max= 480, avg=451.20, stdev=40.08, samples=20 00:34:08.337 lat (msec) : 20=0.35%, 50=99.65% 00:34:08.337 cpu : usr=98.07%, sys=1.45%, ctx=22, majf=0, minf=82 00:34:08.337 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.337 filename2: (groupid=0, jobs=1): err= 0: pid=1297352: Fri Dec 6 19:32:17 2024 00:34:08.337 read: IOPS=448, BW=1794KiB/s (1837kB/s)(17.6MiB/10027msec) 00:34:08.337 slat (usec): min=4, max=105, avg=25.96, stdev=21.07 00:34:08.337 clat (usec): min=21046, max=55083, avg=35431.27, stdev=4102.39 00:34:08.337 lat (usec): min=21057, max=55099, avg=35457.23, stdev=4097.28 00:34:08.337 clat percentiles (usec): 00:34:08.337 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:34:08.337 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.337 | 70.00th=[34341], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.337 | 99.00th=[45351], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:34:08.337 | 99.99th=[55313] 00:34:08.337 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1792.00, stdev=166.11, samples=20 00:34:08.337 iops : min= 352, max= 480, avg=448.00, stdev=41.53, samples=20 00:34:08.337 lat (msec) : 50=99.96%, 100=0.04% 00:34:08.337 cpu : usr=97.27%, sys=1.76%, ctx=182, majf=0, minf=51 00:34:08.337 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:08.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:08.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.337 filename2: (groupid=0, jobs=1): err= 0: pid=1297353: Fri Dec 6 19:32:17 2024 00:34:08.337 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10009msec) 00:34:08.337 slat (usec): min=12, max=125, avg=48.45, stdev=18.42 00:34:08.337 clat (usec): min=28377, max=67772, avg=35327.84, stdev=4334.64 00:34:08.337 lat (usec): min=28405, max=67808, avg=35376.29, stdev=4331.31 00:34:08.337 clat percentiles (usec): 00:34:08.337 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:34:08.337 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:08.337 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:34:08.337 | 99.00th=[45351], 99.50th=[45351], 99.90th=[67634], 99.95th=[67634], 00:34:08.337 | 99.99th=[67634] 00:34:08.337 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1785.60, stdev=173.60, samples=20 00:34:08.337 iops : min= 352, max= 480, avg=446.40, stdev=43.40, samples=20 00:34:08.337 lat (msec) : 50=99.64%, 100=0.36% 00:34:08.337 cpu : usr=96.33%, sys=2.17%, ctx=336, majf=0, minf=45 00:34:08.337 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:08.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.337 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.337 00:34:08.337 Run status group 0 (all jobs): 00:34:08.337 READ: bw=42.1MiB/s (44.1MB/s), 1790KiB/s-1872KiB/s (1833kB/s-1917kB/s), io=422MiB (443MB), run=10002-10034msec 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in 
"$@" 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.337 19:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.337 19:32:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:08.337 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- 
# local sub 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.338 bdev_null0 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.338 19:32:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.338 [2024-12-06 19:32:18.056016] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.338 bdev_null1 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.338 
19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.338 { 00:34:08.338 "params": { 00:34:08.338 "name": "Nvme$subsystem", 00:34:08.338 "trtype": "$TEST_TRANSPORT", 00:34:08.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.338 "adrfam": "ipv4", 00:34:08.338 "trsvcid": "$NVMF_PORT", 00:34:08.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.338 "hdgst": ${hdgst:-false}, 00:34:08.338 "ddgst": ${ddgst:-false} 00:34:08.338 }, 00:34:08.338 "method": "bdev_nvme_attach_controller" 00:34:08.338 } 00:34:08.338 EOF 00:34:08.338 )") 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:08.338 19:32:18 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.338 { 00:34:08.338 "params": { 00:34:08.338 "name": "Nvme$subsystem", 00:34:08.338 "trtype": "$TEST_TRANSPORT", 00:34:08.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.338 "adrfam": "ipv4", 00:34:08.338 "trsvcid": "$NVMF_PORT", 00:34:08.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.338 "hdgst": ${hdgst:-false}, 00:34:08.338 "ddgst": ${ddgst:-false} 00:34:08.338 }, 00:34:08.338 "method": "bdev_nvme_attach_controller" 00:34:08.338 } 00:34:08.338 EOF 00:34:08.338 )") 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:08.338 "params": { 00:34:08.338 "name": "Nvme0", 00:34:08.338 "trtype": "tcp", 00:34:08.338 "traddr": "10.0.0.2", 00:34:08.338 "adrfam": "ipv4", 00:34:08.338 "trsvcid": "4420", 00:34:08.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.338 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.338 "hdgst": false, 00:34:08.338 "ddgst": false 00:34:08.338 }, 00:34:08.338 "method": "bdev_nvme_attach_controller" 00:34:08.338 },{ 00:34:08.338 "params": { 00:34:08.338 "name": "Nvme1", 00:34:08.338 "trtype": "tcp", 00:34:08.338 "traddr": "10.0.0.2", 00:34:08.338 "adrfam": "ipv4", 00:34:08.338 "trsvcid": "4420", 00:34:08.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:08.338 "hdgst": false, 00:34:08.338 "ddgst": false 00:34:08.338 }, 00:34:08.338 "method": "bdev_nvme_attach_controller" 00:34:08.338 }' 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:08.338 19:32:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:08.338 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.338 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:08.338 ... 00:34:08.338 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:08.338 ... 00:34:08.338 fio-3.35 00:34:08.338 Starting 4 threads 00:34:14.896 00:34:14.896 filename0: (groupid=0, jobs=1): err= 0: pid=1298721: Fri Dec 6 19:32:24 2024 00:34:14.896 read: IOPS=1810, BW=14.1MiB/s (14.8MB/s)(70.8MiB/5007msec) 00:34:14.896 slat (nsec): min=7126, max=91130, avg=18840.85, stdev=10540.53 00:34:14.896 clat (usec): min=903, max=14322, avg=4351.82, stdev=712.98 00:34:14.896 lat (usec): min=921, max=14357, avg=4370.66, stdev=712.65 00:34:14.896 clat percentiles (usec): 00:34:14.896 | 1.00th=[ 2573], 5.00th=[ 3523], 10.00th=[ 3818], 20.00th=[ 4080], 00:34:14.896 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:34:14.896 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5538], 00:34:14.896 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 8160], 99.95th=[14353], 00:34:14.896 | 99.99th=[14353] 00:34:14.896 bw ( KiB/s): min=13963, max=14832, per=24.66%, avg=14492.30, stdev=265.82, samples=10 00:34:14.896 iops : min= 1745, max= 1854, avg=1811.50, stdev=33.31, samples=10 00:34:14.896 lat (usec) : 1000=0.03% 00:34:14.896 lat (msec) : 2=0.58%, 4=16.04%, 10=83.25%, 20=0.09% 00:34:14.896 cpu : usr=95.33%, sys=4.17%, ctx=8, majf=0, minf=9 00:34:14.896 IO depths : 1=0.3%, 2=14.4%, 4=58.1%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:14.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.896 
complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.896 issued rwts: total=9064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.896 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:14.896 filename0: (groupid=0, jobs=1): err= 0: pid=1298723: Fri Dec 6 19:32:24 2024 00:34:14.896 read: IOPS=1888, BW=14.8MiB/s (15.5MB/s)(73.8MiB/5003msec) 00:34:14.896 slat (usec): min=3, max=108, avg=22.54, stdev=10.18 00:34:14.896 clat (usec): min=759, max=7947, avg=4154.12, stdev=592.84 00:34:14.896 lat (usec): min=779, max=7968, avg=4176.65, stdev=593.97 00:34:14.896 clat percentiles (usec): 00:34:14.897 | 1.00th=[ 1827], 5.00th=[ 3228], 10.00th=[ 3556], 20.00th=[ 3851], 00:34:14.897 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:34:14.897 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 5014], 00:34:14.897 | 99.00th=[ 5866], 99.50th=[ 6456], 99.90th=[ 7111], 99.95th=[ 7177], 00:34:14.897 | 99.99th=[ 7963] 00:34:14.897 bw ( KiB/s): min=14464, max=16000, per=25.71%, avg=15113.60, stdev=409.40, samples=10 00:34:14.897 iops : min= 1808, max= 2000, avg=1889.20, stdev=51.17, samples=10 00:34:14.897 lat (usec) : 1000=0.01% 00:34:14.897 lat (msec) : 2=1.21%, 4=25.28%, 10=73.50% 00:34:14.897 cpu : usr=91.16%, sys=5.76%, ctx=159, majf=0, minf=0 00:34:14.897 IO depths : 1=0.7%, 2=18.1%, 4=55.4%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:14.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.897 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.897 issued rwts: total=9447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.897 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:14.897 filename1: (groupid=0, jobs=1): err= 0: pid=1298724: Fri Dec 6 19:32:24 2024 00:34:14.897 read: IOPS=1807, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5006msec) 00:34:14.897 slat (nsec): min=7071, max=88438, avg=20165.10, stdev=10891.76 00:34:14.897 clat (usec): min=950, 
max=14714, avg=4351.63, stdev=725.90 00:34:14.897 lat (usec): min=968, max=14726, avg=4371.79, stdev=725.31 00:34:14.897 clat percentiles (usec): 00:34:14.897 | 1.00th=[ 2474], 5.00th=[ 3490], 10.00th=[ 3818], 20.00th=[ 4047], 00:34:14.897 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:34:14.897 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5473], 00:34:14.897 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 7767], 99.95th=[14746], 00:34:14.897 | 99.99th=[14746] 00:34:14.897 bw ( KiB/s): min=14112, max=15024, per=24.61%, avg=14467.20, stdev=305.34, samples=10 00:34:14.897 iops : min= 1764, max= 1878, avg=1808.40, stdev=38.17, samples=10 00:34:14.897 lat (usec) : 1000=0.03% 00:34:14.897 lat (msec) : 2=0.59%, 4=15.69%, 10=83.60%, 20=0.09% 00:34:14.897 cpu : usr=96.44%, sys=3.08%, ctx=7, majf=0, minf=9 00:34:14.897 IO depths : 1=0.2%, 2=15.7%, 4=57.1%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:14.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.897 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.897 issued rwts: total=9050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.897 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:14.897 filename1: (groupid=0, jobs=1): err= 0: pid=1298725: Fri Dec 6 19:32:24 2024 00:34:14.897 read: IOPS=1843, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5008msec) 00:34:14.897 slat (nsec): min=7125, max=81259, avg=16627.78, stdev=9930.84 00:34:14.897 clat (usec): min=609, max=13665, avg=4283.73, stdev=668.67 00:34:14.897 lat (usec): min=617, max=13680, avg=4300.35, stdev=668.75 00:34:14.897 clat percentiles (usec): 00:34:14.897 | 1.00th=[ 2573], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 3949], 00:34:14.897 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:34:14.897 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5211], 00:34:14.897 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 7832], 
99.95th=[13566], 00:34:14.897 | 99.99th=[13698] 00:34:14.897 bw ( KiB/s): min=14432, max=15200, per=25.11%, avg=14758.40, stdev=235.18, samples=10 00:34:14.897 iops : min= 1804, max= 1900, avg=1844.80, stdev=29.40, samples=10 00:34:14.897 lat (usec) : 750=0.06% 00:34:14.897 lat (msec) : 2=0.41%, 4=20.96%, 10=78.51%, 20=0.05% 00:34:14.897 cpu : usr=96.17%, sys=3.36%, ctx=6, majf=0, minf=9 00:34:14.897 IO depths : 1=0.3%, 2=12.1%, 4=59.1%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:14.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.897 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.897 issued rwts: total=9232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.897 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:14.897 00:34:14.897 Run status group 0 (all jobs): 00:34:14.897 READ: bw=57.4MiB/s (60.2MB/s), 14.1MiB/s-14.8MiB/s (14.8MB/s-15.5MB/s), io=287MiB (301MB), run=5003-5008msec 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:14.897 
19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.897 00:34:14.897 real 0m24.767s 00:34:14.897 user 4m33.170s 00:34:14.897 sys 0m6.297s 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.897 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 ************************************ 00:34:14.897 END TEST fio_dif_rand_params 00:34:14.897 ************************************ 00:34:14.897 19:32:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:14.897 19:32:24 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:14.897 19:32:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.897 19:32:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 ************************************ 00:34:14.897 START TEST fio_dif_digest 00:34:14.897 ************************************ 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 bdev_null0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.897 [2024-12-06 19:32:24.689149] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:14.897 19:32:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:14.897 { 00:34:14.897 "params": { 00:34:14.897 "name": "Nvme$subsystem", 00:34:14.897 "trtype": "$TEST_TRANSPORT", 00:34:14.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.898 "adrfam": "ipv4", 00:34:14.898 "trsvcid": "$NVMF_PORT", 00:34:14.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.898 "hdgst": ${hdgst:-false}, 00:34:14.898 "ddgst": ${ddgst:-false} 00:34:14.898 }, 00:34:14.898 "method": "bdev_nvme_attach_controller" 00:34:14.898 } 00:34:14.898 EOF 00:34:14.898 )") 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1345 -- # shift 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:14.898 "params": { 00:34:14.898 "name": "Nvme0", 00:34:14.898 "trtype": "tcp", 00:34:14.898 "traddr": "10.0.0.2", 00:34:14.898 "adrfam": "ipv4", 00:34:14.898 "trsvcid": "4420", 00:34:14.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:14.898 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:14.898 "hdgst": true, 00:34:14.898 "ddgst": true 00:34:14.898 }, 00:34:14.898 "method": "bdev_nvme_attach_controller" 00:34:14.898 }' 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:14.898 19:32:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.898 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:14.898 ... 
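For readers following the trace above: `gen_nvmf_target_json` expands one JSON stanza per subsystem (the heredoc visible in the xtrace) and the `jq .` / `printf` steps merge and emit it as the `--spdk_json_conf` fed to fio. A minimal standalone sketch of that expansion, with the variable values copied from this log (the real helper lives in `nvmf/common.sh` and handles multiple subsystems):

```shell
#!/bin/sh
# Sketch of the per-subsystem config stanza seen in the log above.
# Values are taken from this run's output; hdgst/ddgst default to
# false when unset, which is why dif_digest sets them to true.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
hdgst=true
ddgst=true
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

This matches the resolved config printed a few lines further down, where both digest flags come out `true` because the fio_dif_digest test exports them before calling the helper.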
00:34:14.898 fio-3.35 00:34:14.898 Starting 3 threads 00:34:27.094 00:34:27.094 filename0: (groupid=0, jobs=1): err= 0: pid=1299539: Fri Dec 6 19:32:35 2024 00:34:27.094 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(244MiB/10050msec) 00:34:27.094 slat (nsec): min=4323, max=83957, avg=17936.25, stdev=4112.64 00:34:27.094 clat (usec): min=9448, max=55358, avg=15426.00, stdev=1660.41 00:34:27.094 lat (usec): min=9465, max=55376, avg=15443.93, stdev=1660.36 00:34:27.094 clat percentiles (usec): 00:34:27.094 | 1.00th=[12911], 5.00th=[13698], 10.00th=[14091], 20.00th=[14484], 00:34:27.094 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:34:27.094 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:34:27.094 | 99.00th=[18220], 99.50th=[18744], 99.90th=[52691], 99.95th=[55313], 00:34:27.094 | 99.99th=[55313] 00:34:27.094 bw ( KiB/s): min=23808, max=25856, per=33.33%, avg=24908.80, stdev=424.32, samples=20 00:34:27.094 iops : min= 186, max= 202, avg=194.60, stdev= 3.32, samples=20 00:34:27.094 lat (msec) : 10=0.15%, 20=99.59%, 50=0.15%, 100=0.10% 00:34:27.094 cpu : usr=94.99%, sys=4.52%, ctx=20, majf=0, minf=256 00:34:27.095 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.095 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.095 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:27.095 filename0: (groupid=0, jobs=1): err= 0: pid=1299540: Fri Dec 6 19:32:35 2024 00:34:27.095 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10048msec) 00:34:27.095 slat (nsec): min=4329, max=54968, avg=17553.33, stdev=4428.08 00:34:27.095 clat (usec): min=11600, max=56669, avg=15134.72, stdev=2190.71 00:34:27.095 lat (usec): min=11620, max=56684, avg=15152.27, stdev=2190.78 00:34:27.095 clat percentiles (usec): 00:34:27.095 
| 1.00th=[12518], 5.00th=[13304], 10.00th=[13698], 20.00th=[14222], 00:34:27.095 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:34:27.095 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:34:27.095 | 99.00th=[17695], 99.50th=[18220], 99.90th=[54789], 99.95th=[56886], 00:34:27.095 | 99.99th=[56886] 00:34:27.095 bw ( KiB/s): min=23296, max=26112, per=33.96%, avg=25382.40, stdev=611.90, samples=20 00:34:27.095 iops : min= 182, max= 204, avg=198.30, stdev= 4.78, samples=20 00:34:27.095 lat (msec) : 20=99.75%, 50=0.05%, 100=0.20% 00:34:27.095 cpu : usr=95.55%, sys=3.93%, ctx=13, majf=0, minf=187 00:34:27.095 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.095 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.095 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:27.095 filename0: (groupid=0, jobs=1): err= 0: pid=1299541: Fri Dec 6 19:32:35 2024 00:34:27.095 read: IOPS=193, BW=24.1MiB/s (25.3MB/s)(242MiB/10007msec) 00:34:27.095 slat (nsec): min=4602, max=63147, avg=18716.42, stdev=5425.97 00:34:27.095 clat (usec): min=9167, max=23383, avg=15506.45, stdev=1076.84 00:34:27.095 lat (usec): min=9188, max=23409, avg=15525.16, stdev=1076.76 00:34:27.095 clat percentiles (usec): 00:34:27.095 | 1.00th=[12780], 5.00th=[13960], 10.00th=[14353], 20.00th=[14746], 00:34:27.095 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:34:27.095 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16712], 95.00th=[17171], 00:34:27.095 | 99.00th=[17957], 99.50th=[18220], 99.90th=[23462], 99.95th=[23462], 00:34:27.095 | 99.99th=[23462] 00:34:27.095 bw ( KiB/s): min=23808, max=25856, per=33.05%, avg=24704.00, stdev=541.47, samples=20 00:34:27.095 iops : min= 186, max= 202, avg=193.00, stdev= 4.23, 
samples=20 00:34:27.095 lat (msec) : 10=0.21%, 20=99.64%, 50=0.16% 00:34:27.095 cpu : usr=91.68%, sys=5.84%, ctx=463, majf=0, minf=163 00:34:27.095 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.095 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.095 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:27.095 00:34:27.095 Run status group 0 (all jobs): 00:34:27.095 READ: bw=73.0MiB/s (76.5MB/s), 24.1MiB/s-24.7MiB/s (25.3MB/s-25.9MB/s), io=734MiB (769MB), run=10007-10050msec 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.095 00:34:27.095 real 
0m11.095s 00:34:27.095 user 0m29.529s 00:34:27.095 sys 0m1.698s 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.095 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:27.095 ************************************ 00:34:27.095 END TEST fio_dif_digest 00:34:27.095 ************************************ 00:34:27.095 19:32:35 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:27.095 19:32:35 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:27.095 rmmod nvme_tcp 00:34:27.095 rmmod nvme_fabrics 00:34:27.095 rmmod nvme_keyring 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1293377 ']' 00:34:27.095 19:32:35 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1293377 00:34:27.095 19:32:35 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1293377 ']' 00:34:27.095 19:32:35 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1293377 00:34:27.095 19:32:35 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:27.095 19:32:35 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:27.095 19:32:35 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1293377 00:34:27.095 19:32:35 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:27.095 19:32:35 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:27.095 19:32:35 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1293377' 00:34:27.095 killing process with pid 1293377 00:34:27.095 19:32:35 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1293377 00:34:27.095 19:32:35 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1293377 00:34:27.095 19:32:36 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:27.095 19:32:36 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:27.095 Waiting for block devices as requested 00:34:27.095 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:27.095 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:27.095 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:27.095 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:27.353 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:27.353 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:27.353 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:27.353 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:27.612 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:27.612 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:27.612 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:27.612 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:27.870 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:27.870 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:27.870 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:28.128 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:28.128 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:28.128 19:32:38 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:28.128 19:32:38 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:28.128 19:32:38 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:28.128 19:32:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:28.128 19:32:38 nvmf_dif -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:34:28.129 19:32:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:28.129 19:32:38 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.129 19:32:38 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.129 19:32:38 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.129 19:32:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:28.129 19:32:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.731 19:32:40 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.731 00:34:30.731 real 1m7.552s 00:34:30.731 user 6m30.855s 00:34:30.731 sys 0m17.385s 00:34:30.731 19:32:40 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.731 19:32:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:30.731 ************************************ 00:34:30.731 END TEST nvmf_dif 00:34:30.731 ************************************ 00:34:30.731 19:32:40 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:30.731 19:32:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:30.731 19:32:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.731 19:32:40 -- common/autotest_common.sh@10 -- # set +x 00:34:30.731 ************************************ 00:34:30.731 START TEST nvmf_abort_qd_sizes 00:34:30.731 ************************************ 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:30.731 * Looking for test storage... 
00:34:30.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:30.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.731 --rc genhtml_branch_coverage=1 00:34:30.731 --rc genhtml_function_coverage=1 00:34:30.731 --rc genhtml_legend=1 00:34:30.731 --rc geninfo_all_blocks=1 00:34:30.731 --rc geninfo_unexecuted_blocks=1 00:34:30.731 00:34:30.731 ' 00:34:30.731 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:30.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.731 --rc genhtml_branch_coverage=1 00:34:30.731 --rc genhtml_function_coverage=1 00:34:30.731 --rc genhtml_legend=1 00:34:30.731 --rc 
geninfo_all_blocks=1 00:34:30.731 --rc geninfo_unexecuted_blocks=1 00:34:30.731 00:34:30.732 ' 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:30.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.732 --rc genhtml_branch_coverage=1 00:34:30.732 --rc genhtml_function_coverage=1 00:34:30.732 --rc genhtml_legend=1 00:34:30.732 --rc geninfo_all_blocks=1 00:34:30.732 --rc geninfo_unexecuted_blocks=1 00:34:30.732 00:34:30.732 ' 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:30.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.732 --rc genhtml_branch_coverage=1 00:34:30.732 --rc genhtml_function_coverage=1 00:34:30.732 --rc genhtml_legend=1 00:34:30.732 --rc geninfo_all_blocks=1 00:34:30.732 --rc geninfo_unexecuted_blocks=1 00:34:30.732 00:34:30.732 ' 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.732 19:32:40 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.732 19:32:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:30.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:30.732 19:32:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.636 19:32:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:32.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:32.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:32.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:32.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.636 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:34:32.636 00:34:32.636 --- 10.0.0.2 ping statistics --- 00:34:32.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.636 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:34:32.637 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:32.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:34:32.637 00:34:32.637 --- 10.0.0.1 ping statistics --- 00:34:32.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.637 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:34:32.637 19:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.637 19:32:43 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:32.637 19:32:43 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:32.637 19:32:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:33.573 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:33.573 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:33.573 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:33.573 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:33.573 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:33.833 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:33.833 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:33.833 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:33.833 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:33.833 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:33.833 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:33.833 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:33.833 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:33.833 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:33.833 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:33.833 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:34.772 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:34.772 19:32:45 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1304412 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1304412 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1304412 ']' 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.772 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.028 [2024-12-06 19:32:45.385938] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:34:35.028 [2024-12-06 19:32:45.386040] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.028 [2024-12-06 19:32:45.456584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.029 [2024-12-06 19:32:45.513340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.029 [2024-12-06 19:32:45.513399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.029 [2024-12-06 19:32:45.513413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.029 [2024-12-06 19:32:45.513423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.029 [2024-12-06 19:32:45.513432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:35.029 [2024-12-06 19:32:45.514812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.029 [2024-12-06 19:32:45.514872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.029 [2024-12-06 19:32:45.514939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.029 [2024-12-06 19:32:45.514942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.285 19:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.285 ************************************ 00:34:35.285 START TEST spdk_target_abort 00:34:35.285 ************************************ 00:34:35.285 19:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:35.285 19:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:35.285 19:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:34:35.285 19:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.285 19:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.563 spdk_targetn1 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.563 [2024-12-06 19:32:48.520228] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.563 [2024-12-06 19:32:48.568562] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:38.563 19:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:41.841 Initializing NVMe Controllers 00:34:41.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:41.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:41.841 Initialization complete. Launching workers. 
00:34:41.841 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11996, failed: 0 00:34:41.841 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1293, failed to submit 10703 00:34:41.841 success 722, unsuccessful 571, failed 0 00:34:41.842 19:32:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:41.842 19:32:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.116 Initializing NVMe Controllers 00:34:45.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:45.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:45.116 Initialization complete. Launching workers. 00:34:45.116 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8575, failed: 0 00:34:45.116 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1256, failed to submit 7319 00:34:45.116 success 307, unsuccessful 949, failed 0 00:34:45.116 19:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:45.116 19:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:48.392 Initializing NVMe Controllers 00:34:48.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:48.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:48.392 Initialization complete. Launching workers. 
00:34:48.392 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30622, failed: 0 00:34:48.392 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2749, failed to submit 27873 00:34:48.392 success 557, unsuccessful 2192, failed 0 00:34:48.392 19:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:48.392 19:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.392 19:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:48.392 19:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.392 19:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:48.392 19:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.392 19:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1304412 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1304412 ']' 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1304412 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1304412 00:34:49.323 19:32:59 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1304412' 00:34:49.323 killing process with pid 1304412 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1304412 00:34:49.323 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1304412 00:34:49.581 00:34:49.581 real 0m14.284s 00:34:49.581 user 0m53.852s 00:34:49.581 sys 0m2.772s 00:34:49.581 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.581 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:49.581 ************************************ 00:34:49.581 END TEST spdk_target_abort 00:34:49.581 ************************************ 00:34:49.581 19:32:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:49.581 19:32:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:49.581 19:32:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.581 19:32:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:49.581 ************************************ 00:34:49.581 START TEST kernel_target_abort 00:34:49.581 ************************************ 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:49.581 19:33:00 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:49.581 19:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:50.954 Waiting for block devices as requested 00:34:50.954 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:50.954 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:50.954 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:51.211 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:51.211 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:51.211 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:51.211 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:51.470 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:51.470 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:51.470 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:51.470 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:51.728 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:51.728 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:51.728 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:51.728 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:51.987 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:51.987 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:51.987 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:51.987 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:51.987 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:51.988 19:33:02 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:51.988 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:51.988 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:51.988 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:51.988 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:51.988 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:52.247 No valid GPT data, bailing 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:52.247 00:34:52.247 Discovery Log Number of Records 2, Generation counter 2 00:34:52.247 =====Discovery Log Entry 0====== 00:34:52.247 trtype: tcp 00:34:52.247 adrfam: ipv4 00:34:52.247 subtype: current discovery subsystem 00:34:52.247 treq: not specified, sq flow control disable supported 00:34:52.247 portid: 1 00:34:52.247 trsvcid: 4420 00:34:52.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:52.247 traddr: 10.0.0.1 00:34:52.247 eflags: none 00:34:52.247 sectype: none 00:34:52.247 =====Discovery Log Entry 1====== 00:34:52.247 trtype: tcp 00:34:52.247 adrfam: ipv4 00:34:52.247 subtype: nvme subsystem 00:34:52.247 treq: not specified, sq flow control disable supported 00:34:52.247 portid: 1 00:34:52.247 trsvcid: 4420 00:34:52.247 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:52.247 traddr: 10.0.0.1 00:34:52.247 eflags: none 00:34:52.247 sectype: none 00:34:52.247 19:33:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:52.247 19:33:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:55.530 Initializing NVMe Controllers 00:34:55.530 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:55.530 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:55.530 Initialization complete. Launching workers. 
00:34:55.530 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56372, failed: 0 00:34:55.530 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56372, failed to submit 0 00:34:55.530 success 0, unsuccessful 56372, failed 0 00:34:55.530 19:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:55.530 19:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:58.812 Initializing NVMe Controllers 00:34:58.812 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:58.812 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:58.812 Initialization complete. Launching workers. 00:34:58.812 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99932, failed: 0 00:34:58.812 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25174, failed to submit 74758 00:34:58.812 success 0, unsuccessful 25174, failed 0 00:34:58.812 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:58.812 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:02.096 Initializing NVMe Controllers 00:35:02.096 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:02.096 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:02.096 Initialization complete. Launching workers. 
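The rabort helper driving these runs (target/abort_qd_sizes.sh@17-34) assembles a transport ID string from its positional arguments and sweeps the queue depths 4, 24, and 64. A minimal re-sketch, printing the abort invocation rather than running the SPDK example binary:

```shell
#!/usr/bin/env bash
# Sketch of the rabort loop traced above; prints each abort command line
# instead of executing build/examples/abort.
rabort() {
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local qds qd target r
    qds=(4 24 64)
    target=""
    # Build "trtype:tcp adrfam:IPv4 traddr:... trsvcid:... subnqn:..."
    for r in trtype adrfam traddr trsvcid subnqn; do
        target+="${target:+ }$r:${!r}"
    done
    for qd in "${qds[@]}"; do
        echo "would run: abort -q $qd -w rw -M 50 -o 4096 -r '$target'"
    done
}

rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
```

Note how the per-run counters in the log shift with queue depth: at -q 4 every completed I/O gets an abort submitted, while at the larger depths most aborts fail to submit because the admin queue fills up.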
00:35:02.096 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96267, failed: 0 00:35:02.096 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24074, failed to submit 72193 00:35:02.096 success 0, unsuccessful 24074, failed 0 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:02.096 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:03.036 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:03.036 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:03.036 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:03.037 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:03.037 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:03.037 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:03.037 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:03.037 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:03.037 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:03.037 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:03.037 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:03.037 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:03.037 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:03.037 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:03.037 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:03.037 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:03.976 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:03.976 00:35:03.976 real 0m14.408s 00:35:03.976 user 0m6.681s 00:35:03.976 sys 0m3.208s 00:35:03.976 19:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.976 19:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:03.976 ************************************ 00:35:03.976 END TEST kernel_target_abort 00:35:03.976 ************************************ 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:03.976 rmmod nvme_tcp 00:35:03.976 rmmod nvme_fabrics 00:35:03.976 rmmod nvme_keyring 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1304412 ']' 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1304412 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1304412 ']' 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1304412 00:35:03.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1304412) - No such process 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1304412 is not found' 00:35:03.976 Process with pid 1304412 is not found 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:03.976 19:33:14 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:05.348 Waiting for block devices as requested 00:35:05.348 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:05.348 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:05.348 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:05.676 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:05.676 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:05.676 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:05.676 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:05.953 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:05.953 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:05.953 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:05.953 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:05.953 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:06.271 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:06.271 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:06.271 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:06.271 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:06.529 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:06.529 19:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.060 19:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:09.060 00:35:09.060 real 0m38.285s 00:35:09.060 user 1m2.687s 00:35:09.060 sys 0m9.473s 00:35:09.060 19:33:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:09.060 19:33:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:09.060 ************************************ 00:35:09.060 END TEST nvmf_abort_qd_sizes 00:35:09.060 ************************************ 00:35:09.060 19:33:19 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:09.060 19:33:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:09.060 19:33:19 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:09.060 19:33:19 -- common/autotest_common.sh@10 -- # set +x 00:35:09.060 ************************************ 00:35:09.060 START TEST keyring_file 00:35:09.060 ************************************ 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:09.060 * Looking for test storage... 00:35:09.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.060 19:33:19 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.060 19:33:19 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:09.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.060 --rc genhtml_branch_coverage=1 00:35:09.060 --rc genhtml_function_coverage=1 00:35:09.060 --rc genhtml_legend=1 00:35:09.060 --rc geninfo_all_blocks=1 00:35:09.060 --rc geninfo_unexecuted_blocks=1 00:35:09.060 00:35:09.060 ' 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:09.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.060 --rc genhtml_branch_coverage=1 00:35:09.060 --rc genhtml_function_coverage=1 00:35:09.060 --rc genhtml_legend=1 00:35:09.060 --rc geninfo_all_blocks=1 00:35:09.060 --rc 
geninfo_unexecuted_blocks=1 00:35:09.060 00:35:09.060 ' 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:09.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.060 --rc genhtml_branch_coverage=1 00:35:09.060 --rc genhtml_function_coverage=1 00:35:09.060 --rc genhtml_legend=1 00:35:09.060 --rc geninfo_all_blocks=1 00:35:09.060 --rc geninfo_unexecuted_blocks=1 00:35:09.060 00:35:09.060 ' 00:35:09.060 19:33:19 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:09.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.060 --rc genhtml_branch_coverage=1 00:35:09.060 --rc genhtml_function_coverage=1 00:35:09.060 --rc genhtml_legend=1 00:35:09.060 --rc geninfo_all_blocks=1 00:35:09.060 --rc geninfo_unexecuted_blocks=1 00:35:09.060 00:35:09.060 ' 00:35:09.060 19:33:19 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:09.060 19:33:19 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.060 19:33:19 keyring_file -- 
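The lcov probe above goes through scripts/common.sh's lt/cmp_versions, which splits version strings on ".-:" and compares them component-wise, padding the shorter one with zeros. A self-contained sketch of that comparison, under the assumption that the traced helper follows exactly this shape:

```shell
#!/usr/bin/env bash
# Sketch of the lt/cmp_versions logic traced above: return 0 (true) iff
# version $1 is strictly less than version $2.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0 (so 1.15 vs 2 -> 1.15 vs 2.0).
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

In the log this decides whether the installed lcov (1.15 here) predates version 2, which selects the older --rc lcov_* option spelling.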
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.060 19:33:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.061 19:33:19 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.061 19:33:19 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.061 19:33:19 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.061 19:33:19 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.061 19:33:19 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.061 19:33:19 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.061 19:33:19 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.061 19:33:19 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:09.061 19:33:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:09.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wemvoQExQL 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wemvoQExQL 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wemvoQExQL 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wemvoQExQL 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xYMj6KfWdm 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:09.061 19:33:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xYMj6KfWdm 00:35:09.061 19:33:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xYMj6KfWdm 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.xYMj6KfWdm 
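The prep_key calls above (keyring/common.sh@15-23) turn a raw hex key into an NVMe TLS interchange-format PSK in a 0600 temp file, using an inline python for the encoding. A sketch under assumptions: python3 is available, and the interchange encoding shown (hex digest field, then base64 of key bytes plus a little-endian CRC32) approximates what the real inline python emits:

```shell
#!/usr/bin/env bash
# Sketch of prep_key: write "NVMeTLSkey-1:<digest>:<base64(key+crc32)>:" to a
# private temp file and print the path. The exact encoding is an assumption.
prep_key() {
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)
    python3 - "$key" "$digest" > "$path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC appended before base64
print("NVMeTLSkey-1:%02x:%s:"
      % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
    chmod 0600 "$path"                        # keys must not be world-readable
    echo "$path"
}

key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
```

The test then hands these paths to bperf via keyring_file_add_key, exactly as the rpc.py calls at the end of this section show.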
00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=1310855 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:09.061 19:33:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1310855 00:35:09.061 19:33:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1310855 ']' 00:35:09.061 19:33:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:09.061 19:33:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.061 19:33:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.061 19:33:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.061 19:33:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.061 [2024-12-06 19:33:19.386009] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:35:09.061 [2024-12-06 19:33:19.386099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310855 ] 00:35:09.061 [2024-12-06 19:33:19.451057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.061 [2024-12-06 19:33:19.511562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:09.320 19:33:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.320 [2024-12-06 19:33:19.786416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:09.320 null0 00:35:09.320 [2024-12-06 19:33:19.818469] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:09.320 [2024-12-06 19:33:19.818995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.320 19:33:19 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.320 [2024-12-06 19:33:19.842518] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:09.320 request: 00:35:09.320 { 00:35:09.320 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.320 "secure_channel": false, 00:35:09.320 "listen_address": { 00:35:09.320 "trtype": "tcp", 00:35:09.320 "traddr": "127.0.0.1", 00:35:09.320 "trsvcid": "4420" 00:35:09.320 }, 00:35:09.320 "method": "nvmf_subsystem_add_listener", 00:35:09.320 "req_id": 1 00:35:09.320 } 00:35:09.320 Got JSON-RPC error response 00:35:09.320 response: 00:35:09.320 { 00:35:09.320 "code": -32602, 00:35:09.320 "message": "Invalid parameters" 00:35:09.320 } 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:09.320 19:33:19 keyring_file -- keyring/file.sh@47 -- # bperfpid=1310859 00:35:09.320 19:33:19 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:09.320 19:33:19 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1310859 /var/tmp/bperf.sock 00:35:09.320 19:33:19 
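The duplicate-listener check above is a negative test: the second nvmf_subsystem_add_listener is wrapped in autotest_common.sh's NOT helper, which inverts the exit status so an expected JSON-RPC failure (-32602, "Invalid parameters") counts as a pass. A minimal sketch of that inversion pattern, simplified from the es bookkeeping visible in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the NOT pattern traced above: run a command that is expected to
# fail, and succeed only if it actually failed.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # exit 0 iff the wrapped command returned nonzero
}

NOT false && echo "negative test passed"
```

In the log, rpc_cmd returns nonzero because the listener already exists, so NOT converts that into test success and the captured request/response JSON above is the expected error, not a failure of the test itself.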
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1310859 ']' 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:09.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.320 19:33:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.320 [2024-12-06 19:33:19.889873] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:35:09.320 [2024-12-06 19:33:19.889938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310859 ] 00:35:09.578 [2024-12-06 19:33:19.953881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.578 [2024-12-06 19:33:20.013343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:09.578 19:33:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.578 19:33:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:09.578 19:33:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wemvoQExQL 00:35:09.578 19:33:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wemvoQExQL 00:35:10.143 19:33:20 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xYMj6KfWdm 00:35:10.143 19:33:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xYMj6KfWdm 00:35:10.143 19:33:20 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:10.143 19:33:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:10.143 19:33:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.143 19:33:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.143 19:33:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.401 19:33:20 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.wemvoQExQL == \/\t\m\p\/\t\m\p\.\w\e\m\v\o\Q\E\x\Q\L ]] 00:35:10.401 19:33:20 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:10.401 19:33:20 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:10.401 19:33:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.401 19:33:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.401 19:33:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:10.967 19:33:21 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.xYMj6KfWdm == \/\t\m\p\/\t\m\p\.\x\Y\M\j\6\K\f\W\d\m ]] 00:35:10.967 19:33:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:35:10.967 19:33:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:10.967 19:33:21 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:10.967 19:33:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.226 19:33:21 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:11.226 19:33:21 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.226 19:33:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.483 [2024-12-06 19:33:22.030120] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:11.741 nvme0n1 00:35:11.741 19:33:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:11.741 19:33:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.741 19:33:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.741 19:33:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.741 19:33:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.741 19:33:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:11.999 19:33:22 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:11.999 19:33:22 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:11.999 19:33:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:11.999 19:33:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.999 19:33:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.999 19:33:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.000 19:33:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:12.257 19:33:22 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:12.257 19:33:22 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.257 Running I/O for 1 seconds... 00:35:13.631 9686.00 IOPS, 37.84 MiB/s 00:35:13.631 Latency(us) 00:35:13.631 [2024-12-06T18:33:24.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.631 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:13.631 nvme0n1 : 1.13 8674.05 33.88 0.00 0.00 14675.19 6407.96 208161.75 00:35:13.631 [2024-12-06T18:33:24.208Z] =================================================================================================================== 00:35:13.631 [2024-12-06T18:33:24.208Z] Total : 8674.05 33.88 0.00 0.00 14675.19 6407.96 208161.75 00:35:13.631 { 00:35:13.631 "results": [ 00:35:13.631 { 00:35:13.631 "job": "nvme0n1", 00:35:13.631 "core_mask": "0x2", 00:35:13.631 "workload": "randrw", 00:35:13.631 "percentage": 50, 00:35:13.631 "status": "finished", 00:35:13.631 "queue_depth": 128, 00:35:13.631 "io_size": 4096, 00:35:13.631 "runtime": 1.131421, 00:35:13.631 "iops": 8674.047945017814, 00:35:13.631 "mibps": 33.882999785225834, 
00:35:13.631 "io_failed": 0, 00:35:13.631 "io_timeout": 0, 00:35:13.631 "avg_latency_us": 14675.18623281933, 00:35:13.631 "min_latency_us": 6407.964444444445, 00:35:13.631 "max_latency_us": 208161.75407407407 00:35:13.631 } 00:35:13.631 ], 00:35:13.631 "core_count": 1 00:35:13.631 } 00:35:13.631 19:33:23 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:13.631 19:33:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:13.631 19:33:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:13.631 19:33:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:13.631 19:33:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.631 19:33:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.631 19:33:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.631 19:33:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:13.888 19:33:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:13.889 19:33:24 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:13.889 19:33:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:13.889 19:33:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.889 19:33:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.889 19:33:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.889 19:33:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:14.454 19:33:24 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:14.454 19:33:24 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:14.454 19:33:24 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:14.454 19:33:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:14.454 19:33:24 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:14.454 19:33:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.454 19:33:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:14.454 19:33:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.454 19:33:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:14.454 19:33:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:14.454 [2024-12-06 19:33:25.015845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:14.454 [2024-12-06 19:33:25.016363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf1170 (107): Transport endpoint is not connected 00:35:14.454 [2024-12-06 19:33:25.017354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf1170 (9): Bad file descriptor 00:35:14.454 [2024-12-06 19:33:25.018354] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:14.454 [2024-12-06 19:33:25.018372] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:14.454 [2024-12-06 19:33:25.018394] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:14.454 [2024-12-06 19:33:25.018408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:14.454 request: 00:35:14.454 { 00:35:14.454 "name": "nvme0", 00:35:14.454 "trtype": "tcp", 00:35:14.454 "traddr": "127.0.0.1", 00:35:14.454 "adrfam": "ipv4", 00:35:14.454 "trsvcid": "4420", 00:35:14.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.454 "prchk_reftag": false, 00:35:14.454 "prchk_guard": false, 00:35:14.454 "hdgst": false, 00:35:14.454 "ddgst": false, 00:35:14.454 "psk": "key1", 00:35:14.454 "allow_unrecognized_csi": false, 00:35:14.454 "method": "bdev_nvme_attach_controller", 00:35:14.454 "req_id": 1 00:35:14.454 } 00:35:14.454 Got JSON-RPC error response 00:35:14.454 response: 00:35:14.454 { 00:35:14.454 "code": -5, 00:35:14.454 "message": "Input/output error" 00:35:14.454 } 00:35:14.711 19:33:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:14.711 19:33:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:14.711 19:33:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:14.711 19:33:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:14.711 19:33:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:14.711 19:33:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:14.711 19:33:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.711 19:33:25 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:14.711 19:33:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.711 19:33:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:14.968 19:33:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:14.968 19:33:25 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:14.968 19:33:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:14.968 19:33:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.968 19:33:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.968 19:33:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.968 19:33:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:15.225 19:33:25 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:15.225 19:33:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:15.225 19:33:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:15.483 19:33:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:15.483 19:33:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:15.741 19:33:26 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:15.741 19:33:26 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:15.741 19:33:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.999 19:33:26 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:15.999 19:33:26 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.wemvoQExQL 00:35:15.999 19:33:26 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wemvoQExQL 00:35:15.999 19:33:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:15.999 19:33:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wemvoQExQL 00:35:15.999 19:33:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:15.999 19:33:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.999 19:33:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:15.999 19:33:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.999 19:33:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wemvoQExQL 00:35:15.999 19:33:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wemvoQExQL 00:35:16.257 [2024-12-06 19:33:26.652356] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wemvoQExQL': 0100660 00:35:16.257 [2024-12-06 19:33:26.652390] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:16.257 request: 00:35:16.257 { 00:35:16.257 "name": "key0", 00:35:16.257 "path": "/tmp/tmp.wemvoQExQL", 00:35:16.257 "method": "keyring_file_add_key", 00:35:16.257 "req_id": 1 00:35:16.257 } 00:35:16.257 Got JSON-RPC error response 00:35:16.257 response: 00:35:16.257 { 00:35:16.257 "code": -1, 00:35:16.257 "message": "Operation not permitted" 00:35:16.257 } 00:35:16.257 19:33:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:16.257 19:33:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:16.257 19:33:26 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:16.257 19:33:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:16.257 19:33:26 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.wemvoQExQL 00:35:16.257 19:33:26 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wemvoQExQL 00:35:16.257 19:33:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wemvoQExQL 00:35:16.514 19:33:26 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.wemvoQExQL 00:35:16.514 19:33:26 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:16.514 19:33:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:16.514 19:33:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.514 19:33:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.514 19:33:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.514 19:33:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.772 19:33:27 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:16.772 19:33:27 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:16.772 19:33:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:16.772 19:33:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:16.772 19:33:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:16.772 19:33:27 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.772 19:33:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:16.772 19:33:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.772 19:33:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:16.772 19:33:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.031 [2024-12-06 19:33:27.486655] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wemvoQExQL': No such file or directory 00:35:17.031 [2024-12-06 19:33:27.486699] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:17.031 [2024-12-06 19:33:27.486733] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:17.031 [2024-12-06 19:33:27.486747] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:17.031 [2024-12-06 19:33:27.486759] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:17.031 [2024-12-06 19:33:27.486771] bdev_nvme.c:6795:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:17.031 request: 00:35:17.031 { 00:35:17.031 "name": "nvme0", 00:35:17.031 "trtype": "tcp", 00:35:17.031 "traddr": "127.0.0.1", 00:35:17.031 "adrfam": "ipv4", 00:35:17.031 "trsvcid": "4420", 00:35:17.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.031 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:17.031 "prchk_reftag": false, 00:35:17.031 "prchk_guard": false, 00:35:17.031 "hdgst": false, 00:35:17.031 "ddgst": false, 00:35:17.031 "psk": "key0", 00:35:17.031 "allow_unrecognized_csi": false, 00:35:17.031 "method": "bdev_nvme_attach_controller", 00:35:17.031 "req_id": 1 00:35:17.031 } 00:35:17.031 Got JSON-RPC error response 00:35:17.031 response: 00:35:17.031 { 00:35:17.031 "code": -19, 00:35:17.031 "message": "No such device" 00:35:17.031 } 00:35:17.031 19:33:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:17.031 19:33:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.031 19:33:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.031 19:33:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.031 19:33:27 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:17.031 19:33:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:17.289 19:33:27 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PRTXj9By9f 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:17.289 19:33:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:17.289 19:33:27 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:17.289 19:33:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:17.289 19:33:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:17.289 19:33:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:17.289 19:33:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PRTXj9By9f 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PRTXj9By9f 00:35:17.289 19:33:27 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.PRTXj9By9f 00:35:17.289 19:33:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PRTXj9By9f 00:35:17.289 19:33:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PRTXj9By9f 00:35:17.547 19:33:28 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.547 19:33:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:18.114 nvme0n1 00:35:18.114 19:33:28 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:18.114 19:33:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:18.114 19:33:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.114 19:33:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.114 19:33:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.114 
19:33:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:18.371 19:33:28 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:18.371 19:33:28 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:18.371 19:33:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:18.630 19:33:28 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:18.630 19:33:28 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:18.630 19:33:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.630 19:33:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.630 19:33:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:18.888 19:33:29 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:18.888 19:33:29 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:18.888 19:33:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:18.888 19:33:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.888 19:33:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.888 19:33:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.888 19:33:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:19.146 19:33:29 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:19.146 19:33:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:19.146 19:33:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:35:19.405 19:33:29 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:19.405 19:33:29 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:19.405 19:33:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.663 19:33:30 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:19.663 19:33:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PRTXj9By9f 00:35:19.663 19:33:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PRTXj9By9f 00:35:19.922 19:33:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xYMj6KfWdm 00:35:19.922 19:33:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xYMj6KfWdm 00:35:20.180 19:33:30 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:20.180 19:33:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:20.438 nvme0n1 00:35:20.438 19:33:30 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:20.438 19:33:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:21.005 19:33:31 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:21.005 "subsystems": [ 00:35:21.005 { 00:35:21.005 "subsystem": "keyring", 00:35:21.005 
"config": [ 00:35:21.005 { 00:35:21.005 "method": "keyring_file_add_key", 00:35:21.005 "params": { 00:35:21.005 "name": "key0", 00:35:21.005 "path": "/tmp/tmp.PRTXj9By9f" 00:35:21.005 } 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "method": "keyring_file_add_key", 00:35:21.005 "params": { 00:35:21.005 "name": "key1", 00:35:21.005 "path": "/tmp/tmp.xYMj6KfWdm" 00:35:21.005 } 00:35:21.005 } 00:35:21.005 ] 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "subsystem": "iobuf", 00:35:21.005 "config": [ 00:35:21.005 { 00:35:21.005 "method": "iobuf_set_options", 00:35:21.005 "params": { 00:35:21.005 "small_pool_count": 8192, 00:35:21.005 "large_pool_count": 1024, 00:35:21.005 "small_bufsize": 8192, 00:35:21.005 "large_bufsize": 135168, 00:35:21.005 "enable_numa": false 00:35:21.005 } 00:35:21.005 } 00:35:21.005 ] 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "subsystem": "sock", 00:35:21.005 "config": [ 00:35:21.005 { 00:35:21.005 "method": "sock_set_default_impl", 00:35:21.005 "params": { 00:35:21.005 "impl_name": "posix" 00:35:21.005 } 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "method": "sock_impl_set_options", 00:35:21.005 "params": { 00:35:21.005 "impl_name": "ssl", 00:35:21.005 "recv_buf_size": 4096, 00:35:21.005 "send_buf_size": 4096, 00:35:21.005 "enable_recv_pipe": true, 00:35:21.005 "enable_quickack": false, 00:35:21.005 "enable_placement_id": 0, 00:35:21.005 "enable_zerocopy_send_server": true, 00:35:21.005 "enable_zerocopy_send_client": false, 00:35:21.005 "zerocopy_threshold": 0, 00:35:21.005 "tls_version": 0, 00:35:21.005 "enable_ktls": false 00:35:21.005 } 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "method": "sock_impl_set_options", 00:35:21.005 "params": { 00:35:21.005 "impl_name": "posix", 00:35:21.005 "recv_buf_size": 2097152, 00:35:21.005 "send_buf_size": 2097152, 00:35:21.005 "enable_recv_pipe": true, 00:35:21.005 "enable_quickack": false, 00:35:21.005 "enable_placement_id": 0, 00:35:21.005 "enable_zerocopy_send_server": true, 00:35:21.005 
"enable_zerocopy_send_client": false, 00:35:21.005 "zerocopy_threshold": 0, 00:35:21.005 "tls_version": 0, 00:35:21.005 "enable_ktls": false 00:35:21.005 } 00:35:21.005 } 00:35:21.005 ] 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "subsystem": "vmd", 00:35:21.005 "config": [] 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "subsystem": "accel", 00:35:21.005 "config": [ 00:35:21.005 { 00:35:21.005 "method": "accel_set_options", 00:35:21.005 "params": { 00:35:21.005 "small_cache_size": 128, 00:35:21.005 "large_cache_size": 16, 00:35:21.005 "task_count": 2048, 00:35:21.005 "sequence_count": 2048, 00:35:21.005 "buf_count": 2048 00:35:21.005 } 00:35:21.005 } 00:35:21.005 ] 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "subsystem": "bdev", 00:35:21.005 "config": [ 00:35:21.005 { 00:35:21.005 "method": "bdev_set_options", 00:35:21.005 "params": { 00:35:21.005 "bdev_io_pool_size": 65535, 00:35:21.005 "bdev_io_cache_size": 256, 00:35:21.005 "bdev_auto_examine": true, 00:35:21.005 "iobuf_small_cache_size": 128, 00:35:21.005 "iobuf_large_cache_size": 16 00:35:21.005 } 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "method": "bdev_raid_set_options", 00:35:21.005 "params": { 00:35:21.005 "process_window_size_kb": 1024, 00:35:21.005 "process_max_bandwidth_mb_sec": 0 00:35:21.005 } 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "method": "bdev_iscsi_set_options", 00:35:21.005 "params": { 00:35:21.005 "timeout_sec": 30 00:35:21.005 } 00:35:21.005 }, 00:35:21.005 { 00:35:21.005 "method": "bdev_nvme_set_options", 00:35:21.005 "params": { 00:35:21.005 "action_on_timeout": "none", 00:35:21.005 "timeout_us": 0, 00:35:21.005 "timeout_admin_us": 0, 00:35:21.005 "keep_alive_timeout_ms": 10000, 00:35:21.005 "arbitration_burst": 0, 00:35:21.005 "low_priority_weight": 0, 00:35:21.005 "medium_priority_weight": 0, 00:35:21.005 "high_priority_weight": 0, 00:35:21.005 "nvme_adminq_poll_period_us": 10000, 00:35:21.005 "nvme_ioq_poll_period_us": 0, 00:35:21.005 "io_queue_requests": 512, 00:35:21.005 
"delay_cmd_submit": true, 00:35:21.005 "transport_retry_count": 4, 00:35:21.005 "bdev_retry_count": 3, 00:35:21.005 "transport_ack_timeout": 0, 00:35:21.005 "ctrlr_loss_timeout_sec": 0, 00:35:21.006 "reconnect_delay_sec": 0, 00:35:21.006 "fast_io_fail_timeout_sec": 0, 00:35:21.006 "disable_auto_failback": false, 00:35:21.006 "generate_uuids": false, 00:35:21.006 "transport_tos": 0, 00:35:21.006 "nvme_error_stat": false, 00:35:21.006 "rdma_srq_size": 0, 00:35:21.006 "io_path_stat": false, 00:35:21.006 "allow_accel_sequence": false, 00:35:21.006 "rdma_max_cq_size": 0, 00:35:21.006 "rdma_cm_event_timeout_ms": 0, 00:35:21.006 "dhchap_digests": [ 00:35:21.006 "sha256", 00:35:21.006 "sha384", 00:35:21.006 "sha512" 00:35:21.006 ], 00:35:21.006 "dhchap_dhgroups": [ 00:35:21.006 "null", 00:35:21.006 "ffdhe2048", 00:35:21.006 "ffdhe3072", 00:35:21.006 "ffdhe4096", 00:35:21.006 "ffdhe6144", 00:35:21.006 "ffdhe8192" 00:35:21.006 ], 00:35:21.006 "rdma_umr_per_io": false 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "method": "bdev_nvme_attach_controller", 00:35:21.006 "params": { 00:35:21.006 "name": "nvme0", 00:35:21.006 "trtype": "TCP", 00:35:21.006 "adrfam": "IPv4", 00:35:21.006 "traddr": "127.0.0.1", 00:35:21.006 "trsvcid": "4420", 00:35:21.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.006 "prchk_reftag": false, 00:35:21.006 "prchk_guard": false, 00:35:21.006 "ctrlr_loss_timeout_sec": 0, 00:35:21.006 "reconnect_delay_sec": 0, 00:35:21.006 "fast_io_fail_timeout_sec": 0, 00:35:21.006 "psk": "key0", 00:35:21.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:21.006 "hdgst": false, 00:35:21.006 "ddgst": false, 00:35:21.006 "multipath": "multipath" 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "method": "bdev_nvme_set_hotplug", 00:35:21.006 "params": { 00:35:21.006 "period_us": 100000, 00:35:21.006 "enable": false 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "method": "bdev_wait_for_examine" 00:35:21.006 } 00:35:21.006 ] 00:35:21.006 
}, 00:35:21.006 { 00:35:21.006 "subsystem": "nbd", 00:35:21.006 "config": [] 00:35:21.006 } 00:35:21.006 ] 00:35:21.006 }' 00:35:21.006 19:33:31 keyring_file -- keyring/file.sh@115 -- # killprocess 1310859 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1310859 ']' 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1310859 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1310859 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1310859' 00:35:21.006 killing process with pid 1310859 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@973 -- # kill 1310859 00:35:21.006 Received shutdown signal, test time was about 1.000000 seconds 00:35:21.006 00:35:21.006 Latency(us) 00:35:21.006 [2024-12-06T18:33:31.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.006 [2024-12-06T18:33:31.583Z] =================================================================================================================== 00:35:21.006 [2024-12-06T18:33:31.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@978 -- # wait 1310859 00:35:21.006 19:33:31 keyring_file -- keyring/file.sh@118 -- # bperfpid=1312447 00:35:21.006 19:33:31 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1312447 /var/tmp/bperf.sock 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1312447 ']' 00:35:21.006 19:33:31 keyring_file -- keyring/file.sh@116 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:21.006 19:33:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:21.006 19:33:31 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:21.006 "subsystems": [ 00:35:21.006 { 00:35:21.006 "subsystem": "keyring", 00:35:21.006 "config": [ 00:35:21.006 { 00:35:21.006 "method": "keyring_file_add_key", 00:35:21.006 "params": { 00:35:21.006 "name": "key0", 00:35:21.006 "path": "/tmp/tmp.PRTXj9By9f" 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "method": "keyring_file_add_key", 00:35:21.006 "params": { 00:35:21.006 "name": "key1", 00:35:21.006 "path": "/tmp/tmp.xYMj6KfWdm" 00:35:21.006 } 00:35:21.006 } 00:35:21.006 ] 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "subsystem": "iobuf", 00:35:21.006 "config": [ 00:35:21.006 { 00:35:21.006 "method": "iobuf_set_options", 00:35:21.006 "params": { 00:35:21.006 "small_pool_count": 8192, 00:35:21.006 "large_pool_count": 1024, 00:35:21.006 "small_bufsize": 8192, 00:35:21.006 "large_bufsize": 135168, 00:35:21.006 "enable_numa": false 00:35:21.006 } 00:35:21.006 } 00:35:21.006 ] 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "subsystem": "sock", 00:35:21.006 "config": [ 00:35:21.006 { 00:35:21.006 "method": "sock_set_default_impl", 00:35:21.006 "params": { 00:35:21.006 "impl_name": "posix" 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "method": "sock_impl_set_options", 00:35:21.006 "params": { 00:35:21.006 "impl_name": "ssl", 00:35:21.006 "recv_buf_size": 4096, 00:35:21.006 "send_buf_size": 4096, 00:35:21.006 "enable_recv_pipe": true, 00:35:21.006 "enable_quickack": false, 00:35:21.006 "enable_placement_id": 0, 00:35:21.006 "enable_zerocopy_send_server": true, 00:35:21.006 "enable_zerocopy_send_client": false, 00:35:21.006 
"zerocopy_threshold": 0, 00:35:21.006 "tls_version": 0, 00:35:21.006 "enable_ktls": false 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "method": "sock_impl_set_options", 00:35:21.006 "params": { 00:35:21.006 "impl_name": "posix", 00:35:21.006 "recv_buf_size": 2097152, 00:35:21.006 "send_buf_size": 2097152, 00:35:21.006 "enable_recv_pipe": true, 00:35:21.006 "enable_quickack": false, 00:35:21.006 "enable_placement_id": 0, 00:35:21.006 "enable_zerocopy_send_server": true, 00:35:21.006 "enable_zerocopy_send_client": false, 00:35:21.006 "zerocopy_threshold": 0, 00:35:21.006 "tls_version": 0, 00:35:21.006 "enable_ktls": false 00:35:21.006 } 00:35:21.006 } 00:35:21.006 ] 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "subsystem": "vmd", 00:35:21.006 "config": [] 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "subsystem": "accel", 00:35:21.006 "config": [ 00:35:21.006 { 00:35:21.006 "method": "accel_set_options", 00:35:21.006 "params": { 00:35:21.006 "small_cache_size": 128, 00:35:21.006 "large_cache_size": 16, 00:35:21.006 "task_count": 2048, 00:35:21.006 "sequence_count": 2048, 00:35:21.006 "buf_count": 2048 00:35:21.006 } 00:35:21.006 } 00:35:21.006 ] 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "subsystem": "bdev", 00:35:21.006 "config": [ 00:35:21.006 { 00:35:21.006 "method": "bdev_set_options", 00:35:21.006 "params": { 00:35:21.006 "bdev_io_pool_size": 65535, 00:35:21.006 "bdev_io_cache_size": 256, 00:35:21.006 "bdev_auto_examine": true, 00:35:21.006 "iobuf_small_cache_size": 128, 00:35:21.006 "iobuf_large_cache_size": 16 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "method": "bdev_raid_set_options", 00:35:21.006 "params": { 00:35:21.006 "process_window_size_kb": 1024, 00:35:21.006 "process_max_bandwidth_mb_sec": 0 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "method": "bdev_iscsi_set_options", 00:35:21.006 "params": { 00:35:21.006 "timeout_sec": 30 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "method": 
"bdev_nvme_set_options", 00:35:21.006 "params": { 00:35:21.006 "action_on_timeout": "none", 00:35:21.006 "timeout_us": 0, 00:35:21.006 "timeout_admin_us": 0, 00:35:21.006 "keep_alive_timeout_ms": 10000, 00:35:21.006 "arbitration_burst": 0, 00:35:21.007 "low_priority_weight": 0, 00:35:21.007 "medium_priority_weight": 0, 00:35:21.007 "high_priority_weight": 0, 00:35:21.007 "nvme_adminq_poll_period_us": 10000, 00:35:21.007 "nvme_ioq_poll_period_us": 0, 00:35:21.007 "io_queue_requests": 512, 00:35:21.007 "delay_cmd_submit": true, 00:35:21.007 "transport_retry_count": 4, 00:35:21.007 "bdev_retry_count": 3, 00:35:21.007 "transport_ack_timeout": 0, 00:35:21.007 "ctrlr_loss_timeout_sec": 0, 00:35:21.007 "reconnect_delay_sec": 0, 00:35:21.007 "fast_io_fail_timeout_sec": 0, 00:35:21.007 "disable_auto_failback": false, 00:35:21.007 "generate_uuids": false, 00:35:21.007 "transport_tos": 0, 00:35:21.007 "nvme_error_stat": false, 00:35:21.007 "rdma_srq_size": 0, 00:35:21.007 "io_path_stat": false, 00:35:21.007 "allow_accel_sequence": false, 00:35:21.007 "rdma_max_cq_size": 0, 00:35:21.007 "rdma_cm_event_timeout_ms": 0, 00:35:21.007 "dhchap_digests": [ 00:35:21.007 "sha256", 00:35:21.007 "sha384", 00:35:21.007 "sha512" 00:35:21.007 ], 00:35:21.007 "dhchap_dhgroups": [ 00:35:21.007 "null", 00:35:21.007 "ffdhe2048", 00:35:21.007 "ffdhe3072", 00:35:21.007 "ffdhe4096", 00:35:21.007 "ffdhe6144", 00:35:21.007 "ffdhe8192" 00:35:21.007 ], 00:35:21.007 "rdma_umr_per_io": false 00:35:21.007 } 00:35:21.007 }, 00:35:21.007 { 00:35:21.007 "method": "bdev_nvme_attach_controller", 00:35:21.007 "params": { 00:35:21.007 "name": "nvme0", 00:35:21.007 "trtype": "TCP", 00:35:21.007 "adrfam": "IPv4", 00:35:21.007 "traddr": "127.0.0.1", 00:35:21.007 "trsvcid": "4420", 00:35:21.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.007 "prchk_reftag": false, 00:35:21.007 "prchk_guard": false, 00:35:21.007 "ctrlr_loss_timeout_sec": 0, 00:35:21.007 "reconnect_delay_sec": 0, 00:35:21.007 
"fast_io_fail_timeout_sec": 0, 00:35:21.007 "psk": "key0", 00:35:21.007 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:21.007 "hdgst": false, 00:35:21.007 "ddgst": false, 00:35:21.007 "multipath": "multipath" 00:35:21.007 } 00:35:21.007 }, 00:35:21.007 { 00:35:21.007 "method": "bdev_nvme_set_hotplug", 00:35:21.007 "params": { 00:35:21.007 "period_us": 100000, 00:35:21.007 "enable": false 00:35:21.007 } 00:35:21.007 }, 00:35:21.007 { 00:35:21.007 "method": "bdev_wait_for_examine" 00:35:21.007 } 00:35:21.007 ] 00:35:21.007 }, 00:35:21.007 { 00:35:21.007 "subsystem": "nbd", 00:35:21.007 "config": [] 00:35:21.007 } 00:35:21.007 ] 00:35:21.007 }' 00:35:21.007 19:33:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:21.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:21.007 19:33:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:21.007 19:33:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:21.265 [2024-12-06 19:33:31.594898] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:35:21.265 [2024-12-06 19:33:31.595001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312447 ] 00:35:21.265 [2024-12-06 19:33:31.661191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.265 [2024-12-06 19:33:31.721910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:21.523 [2024-12-06 19:33:31.914808] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:21.523 19:33:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:21.523 19:33:32 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:21.523 19:33:32 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:21.524 19:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.524 19:33:32 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:21.782 19:33:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:21.782 19:33:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:21.782 19:33:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:21.782 19:33:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:21.782 19:33:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.782 19:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.782 19:33:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:22.040 19:33:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:22.040 19:33:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:22.040 19:33:32 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:22.040 19:33:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:22.040 19:33:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:22.040 19:33:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:22.040 19:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.297 19:33:32 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:22.297 19:33:32 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:22.297 19:33:32 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:22.297 19:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:22.864 19:33:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:22.864 19:33:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:22.864 19:33:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.PRTXj9By9f /tmp/tmp.xYMj6KfWdm 00:35:22.864 19:33:33 keyring_file -- keyring/file.sh@20 -- # killprocess 1312447 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1312447 ']' 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1312447 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1312447 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1312447' 00:35:22.864 killing process with pid 1312447 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@973 -- # kill 1312447 00:35:22.864 Received shutdown signal, test time was about 1.000000 seconds 00:35:22.864 00:35:22.864 Latency(us) 00:35:22.864 [2024-12-06T18:33:33.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.864 [2024-12-06T18:33:33.441Z] =================================================================================================================== 00:35:22.864 [2024-12-06T18:33:33.441Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@978 -- # wait 1312447 00:35:22.864 19:33:33 keyring_file -- keyring/file.sh@21 -- # killprocess 1310855 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1310855 ']' 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1310855 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1310855 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1310855' 00:35:22.864 killing process with pid 1310855 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@973 -- # kill 1310855 00:35:22.864 19:33:33 keyring_file -- common/autotest_common.sh@978 -- # wait 1310855 00:35:23.431 00:35:23.431 real 0m14.709s 00:35:23.431 user 0m37.376s 00:35:23.431 sys 0m3.263s 00:35:23.431 19:33:33 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:23.431 19:33:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:23.431 ************************************ 00:35:23.431 END TEST keyring_file 00:35:23.431 ************************************ 00:35:23.431 19:33:33 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:23.431 19:33:33 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:23.431 19:33:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:23.431 19:33:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:23.431 19:33:33 -- common/autotest_common.sh@10 -- # set +x 00:35:23.431 ************************************ 00:35:23.431 START TEST keyring_linux 00:35:23.431 ************************************ 00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:23.431 Joined session keyring: 649415075 00:35:23.431 * Looking for test storage... 
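The `get_refcnt` checks in the keyring_file run above pipe `rpc.py keyring_get_keys` through `jq '.[] | select(.name == "key0")'` and compare the key's `refcnt` against an expected value (2 for key0, 1 for key1). The same filter expressed in Python, run against a hypothetical `keyring_get_keys` response (the field names `name`, `path`, and `refcnt` come from the jq filters in the log; the sample values are illustrative):

```python
import json

# Hypothetical keyring_get_keys response, shaped like the RPC output that
# the log's jq filters operate on.
sample_response = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.PRTXj9By9f", "refcnt": 2},
  {"name": "key1", "path": "/tmp/tmp.xYMj6KfWdm", "refcnt": 1}
]
""")

def get_refcnt(keys, name):
    """Mirror jq's '.[] | select(.name == NAME) | .refcnt' over the key list."""
    return next(k["refcnt"] for k in keys if k["name"] == name)

print(get_refcnt(sample_response, "key0"))  # 2 in this sample
```

key0 is held twice here because both the keyring and the attached nvme0 controller reference it, while key1 is only registered, which is what the `(( 2 == 2 ))` and `(( 1 == 1 ))` checks in the log verify.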
00:35:23.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:23.431 19:33:33 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:23.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.431 --rc genhtml_branch_coverage=1 00:35:23.431 --rc genhtml_function_coverage=1 00:35:23.431 --rc genhtml_legend=1 00:35:23.431 --rc geninfo_all_blocks=1 00:35:23.431 --rc geninfo_unexecuted_blocks=1 00:35:23.431 00:35:23.431 ' 00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:23.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.431 --rc genhtml_branch_coverage=1 00:35:23.431 --rc genhtml_function_coverage=1 00:35:23.431 --rc genhtml_legend=1 00:35:23.431 --rc geninfo_all_blocks=1 00:35:23.431 --rc geninfo_unexecuted_blocks=1 00:35:23.431 00:35:23.431 ' 
00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:23.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.431 --rc genhtml_branch_coverage=1 00:35:23.431 --rc genhtml_function_coverage=1 00:35:23.431 --rc genhtml_legend=1 00:35:23.431 --rc geninfo_all_blocks=1 00:35:23.431 --rc geninfo_unexecuted_blocks=1 00:35:23.431 00:35:23.431 ' 00:35:23.431 19:33:33 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:23.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.431 --rc genhtml_branch_coverage=1 00:35:23.431 --rc genhtml_function_coverage=1 00:35:23.431 --rc genhtml_legend=1 00:35:23.431 --rc geninfo_all_blocks=1 00:35:23.431 --rc geninfo_unexecuted_blocks=1 00:35:23.431 00:35:23.431 ' 00:35:23.431 19:33:33 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:23.431 19:33:33 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:23.431 19:33:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.432 19:33:33 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:23.432 19:33:33 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.432 19:33:33 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.432 19:33:33 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.432 19:33:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.432 19:33:33 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.432 19:33:33 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.432 19:33:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:23.432 19:33:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:23.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:23.432 19:33:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:23.432 19:33:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:23.432 19:33:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:23.432 19:33:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:23.432 19:33:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:23.432 19:33:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:23.432 19:33:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:23.432 19:33:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:23.432 19:33:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:23.432 19:33:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:23.432 19:33:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:23.432 19:33:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:23.432 19:33:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:23.432 19:33:33 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:23.690 19:33:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:23.690 19:33:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:23.690 /tmp/:spdk-test:key0 00:35:23.690 19:33:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:23.691 19:33:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:23.691 19:33:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:23.691 19:33:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:23.691 19:33:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:23.691 19:33:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:23.691 19:33:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:23.691 19:33:34 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:23.691 19:33:34 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:23.691 19:33:34 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:23.691 19:33:34 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:23.691 19:33:34 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:23.691 19:33:34 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:23.691 19:33:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:23.691 19:33:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:23.691 /tmp/:spdk-test:key1 00:35:23.691 19:33:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1312811 00:35:23.691 19:33:34 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:23.691 19:33:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1312811 00:35:23.691 19:33:34 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1312811 ']' 00:35:23.691 19:33:34 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.691 19:33:34 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:23.691 19:33:34 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.691 19:33:34 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:23.691 19:33:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:23.691 [2024-12-06 19:33:34.134438] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:35:23.691 [2024-12-06 19:33:34.134539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312811 ] 00:35:23.691 [2024-12-06 19:33:34.202734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.691 [2024-12-06 19:33:34.256334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.949 19:33:34 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:23.949 19:33:34 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:23.949 19:33:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:23.949 19:33:34 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.949 19:33:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:24.207 [2024-12-06 19:33:34.528117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.207 null0 00:35:24.207 [2024-12-06 19:33:34.560127] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:24.207 [2024-12-06 19:33:34.560605] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:24.207 19:33:34 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.207 19:33:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:24.207 980663601 00:35:24.207 19:33:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:24.207 298540696 00:35:24.207 19:33:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1312823 00:35:24.207 19:33:34 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:24.207 19:33:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1312823 /var/tmp/bperf.sock 00:35:24.207 19:33:34 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1312823 ']' 00:35:24.207 19:33:34 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:24.207 19:33:34 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.207 19:33:34 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:24.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:24.207 19:33:34 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.207 19:33:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:24.207 [2024-12-06 19:33:34.627107] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:35:24.207 [2024-12-06 19:33:34.627169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312823 ] 00:35:24.207 [2024-12-06 19:33:34.689907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.207 [2024-12-06 19:33:34.748048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.464 19:33:34 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.464 19:33:34 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:24.464 19:33:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:24.464 19:33:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:24.720 19:33:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:24.720 19:33:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:24.977 19:33:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:24.977 19:33:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:25.235 [2024-12-06 19:33:35.739020] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:25.235 nvme0n1 00:35:25.493 19:33:35 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:25.493 19:33:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:25.493 19:33:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:25.493 19:33:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:25.493 19:33:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.493 19:33:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:25.751 19:33:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:25.751 19:33:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:25.751 19:33:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:25.751 19:33:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:25.751 19:33:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:25.751 19:33:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:25.751 19:33:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:26.007 19:33:36 keyring_linux -- keyring/linux.sh@25 -- # sn=980663601 00:35:26.007 19:33:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:26.007 19:33:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:26.007 19:33:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 980663601 == \9\8\0\6\6\3\6\0\1 ]] 00:35:26.007 19:33:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 980663601 00:35:26.008 19:33:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:26.008 19:33:36 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:26.008 Running I/O for 1 seconds... 00:35:27.197 11468.00 IOPS, 44.80 MiB/s 00:35:27.197 Latency(us) 00:35:27.197 [2024-12-06T18:33:37.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.197 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:27.197 nvme0n1 : 1.01 11463.73 44.78 0.00 0.00 11090.71 3276.80 14563.56 00:35:27.197 [2024-12-06T18:33:37.774Z] =================================================================================================================== 00:35:27.197 [2024-12-06T18:33:37.774Z] Total : 11463.73 44.78 0.00 0.00 11090.71 3276.80 14563.56 00:35:27.197 { 00:35:27.197 "results": [ 00:35:27.197 { 00:35:27.197 "job": "nvme0n1", 00:35:27.197 "core_mask": "0x2", 00:35:27.197 "workload": "randread", 00:35:27.197 "status": "finished", 00:35:27.197 "queue_depth": 128, 00:35:27.197 "io_size": 4096, 00:35:27.197 "runtime": 1.011625, 00:35:27.197 "iops": 11463.734091189917, 00:35:27.197 "mibps": 44.780211293710614, 00:35:27.197 "io_failed": 0, 00:35:27.197 "io_timeout": 0, 00:35:27.197 "avg_latency_us": 11090.714132326686, 00:35:27.197 "min_latency_us": 3276.8, 00:35:27.197 "max_latency_us": 14563.555555555555 00:35:27.197 } 00:35:27.197 ], 00:35:27.197 "core_count": 1 00:35:27.197 } 00:35:27.197 19:33:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:27.197 19:33:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:27.453 19:33:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:27.453 19:33:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:27.453 19:33:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:27.453 19:33:37 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:27.453 19:33:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:27.453 19:33:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:27.709 19:33:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:27.709 19:33:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:27.709 19:33:38 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:27.709 19:33:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:27.709 19:33:38 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:27.709 19:33:38 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:27.709 19:33:38 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:27.709 19:33:38 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.709 19:33:38 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:27.709 19:33:38 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.709 19:33:38 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:27.709 19:33:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:27.967 [2024-12-06 19:33:38.345858] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:27.967 [2024-12-06 19:33:38.346470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5daf20 (107): Transport endpoint is not connected 00:35:27.967 [2024-12-06 19:33:38.347461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5daf20 (9): Bad file descriptor 00:35:27.967 [2024-12-06 19:33:38.348460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:27.967 [2024-12-06 19:33:38.348479] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:27.967 [2024-12-06 19:33:38.348500] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:27.967 [2024-12-06 19:33:38.348521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:27.967 request: 00:35:27.967 { 00:35:27.967 "name": "nvme0", 00:35:27.967 "trtype": "tcp", 00:35:27.967 "traddr": "127.0.0.1", 00:35:27.967 "adrfam": "ipv4", 00:35:27.967 "trsvcid": "4420", 00:35:27.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:27.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:27.967 "prchk_reftag": false, 00:35:27.967 "prchk_guard": false, 00:35:27.967 "hdgst": false, 00:35:27.967 "ddgst": false, 00:35:27.967 "psk": ":spdk-test:key1", 00:35:27.967 "allow_unrecognized_csi": false, 00:35:27.967 "method": "bdev_nvme_attach_controller", 00:35:27.967 "req_id": 1 00:35:27.967 } 00:35:27.967 Got JSON-RPC error response 00:35:27.967 response: 00:35:27.967 { 00:35:27.967 "code": -5, 00:35:27.967 "message": "Input/output error" 00:35:27.967 } 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@33 -- # sn=980663601 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 980663601 00:35:27.967 1 links removed 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:27.967 
19:33:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@33 -- # sn=298540696 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 298540696 00:35:27.967 1 links removed 00:35:27.967 19:33:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1312823 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1312823 ']' 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1312823 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1312823 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1312823' 00:35:27.967 killing process with pid 1312823 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@973 -- # kill 1312823 00:35:27.967 Received shutdown signal, test time was about 1.000000 seconds 00:35:27.967 00:35:27.967 Latency(us) 00:35:27.967 [2024-12-06T18:33:38.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.967 [2024-12-06T18:33:38.544Z] =================================================================================================================== 00:35:27.967 [2024-12-06T18:33:38.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:27.967 19:33:38 keyring_linux -- common/autotest_common.sh@978 -- # wait 1312823 
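The keyring_linux test above prepares its two `:spdk-test:key0`/`:spdk-test:key1` session-keyring entries by converting a raw key string into the NVMe TLS PSK "interchange format" via an inline `python -` snippet (`format_interchange_psk` → `format_key` in `nvmf/common.sh`). A rough sketch of what that conversion appears to compute, assuming the interchange format is the `NVMeTLSkey-1:<digest>:` prefix followed by base64 of the key bytes with a little-endian CRC-32 appended (the expected strings below are taken verbatim from the `keyctl add` payloads in this log):

```python
# Sketch of the PSK interchange-format conversion performed by the
# inline `python -` step in the keyring_linux trace above.
# Assumption: format is "NVMeTLSkey-1:<digest>:" + base64(key || CRC-32 LE) + ":",
# with the hex string used verbatim as the key material (digest 0 = no hash).
import base64
import zlib


def format_interchange_psk(key: bytes, digest: int = 0) -> str:
    """Wrap configured PSK bytes in the NVMe TLS PSK interchange format."""
    # CRC-32 of the key bytes, appended little-endian before base64 encoding.
    crc = zlib.crc32(key).to_bytes(4, "little")
    return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(key + crc).decode()}:"


# The two keys prepared by prep_key in this run.
key0 = format_interchange_psk(b"00112233445566778899aabbccddeeff")
key1 = format_interchange_psk(b"112233445566778899aabbccddeeff00")
```

The resulting strings are what `keyctl add user :spdk-test:keyN <psk> @s` stores in the session keyring, and what the later `bdev_nvme_attach_controller ... --psk :spdk-test:key0` call resolves through the kernel keyring.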
00:35:28.230 19:33:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1312811 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1312811 ']' 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1312811 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1312811 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1312811' 00:35:28.230 killing process with pid 1312811 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@973 -- # kill 1312811 00:35:28.230 19:33:38 keyring_linux -- common/autotest_common.sh@978 -- # wait 1312811 00:35:28.796 00:35:28.796 real 0m5.247s 00:35:28.796 user 0m10.405s 00:35:28.796 sys 0m1.602s 00:35:28.796 19:33:39 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.796 19:33:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:28.796 ************************************ 00:35:28.796 END TEST keyring_linux 00:35:28.796 ************************************ 00:35:28.796 19:33:39 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:28.796 19:33:39 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:28.796 19:33:39 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:28.796 19:33:39 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:28.796 19:33:39 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:28.796 19:33:39 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:28.796 19:33:39 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:28.796 19:33:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:28.796 19:33:39 -- common/autotest_common.sh@10 -- # set +x 00:35:28.796 19:33:39 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:28.796 19:33:39 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:28.796 19:33:39 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:28.796 19:33:39 -- common/autotest_common.sh@10 -- # set +x 00:35:30.730 INFO: APP EXITING 00:35:30.730 INFO: killing all VMs 00:35:30.730 INFO: killing vhost app 00:35:30.730 INFO: EXIT DONE 00:35:31.690 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:35:31.690 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:35:31.690 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:35:31.690 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:35:31.690 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:35:31.690 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:35:31.690 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:35:31.690 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:35:31.690 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:35:31.690 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:35:31.949 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:35:31.949 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:35:31.949 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:35:31.949 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:35:31.949 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:35:31.949 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:35:31.949 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:35:33.327 Cleaning 00:35:33.327 Removing: /var/run/dpdk/spdk0/config 00:35:33.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:33.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:33.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:33.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:33.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:33.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:33.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:33.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:33.327 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:33.327 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:33.327 Removing: /var/run/dpdk/spdk1/config 00:35:33.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:33.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:33.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:33.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:33.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:33.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:33.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:33.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:33.327 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:33.327 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:33.327 Removing: /var/run/dpdk/spdk2/config 00:35:33.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:33.327 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:33.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:33.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:33.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:33.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:33.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:33.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:33.327 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:33.327 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:33.327 Removing: /var/run/dpdk/spdk3/config 00:35:33.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:33.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:33.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:33.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:33.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:33.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:33.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:33.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:33.327 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:33.327 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:33.327 Removing: /var/run/dpdk/spdk4/config 00:35:33.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:33.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:33.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:33.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:33.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:33.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:33.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:33.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:33.327 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:33.327 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:33.327 Removing: /dev/shm/bdev_svc_trace.1 00:35:33.327 Removing: /dev/shm/nvmf_trace.0 00:35:33.327 Removing: /dev/shm/spdk_tgt_trace.pid990744 00:35:33.327 Removing: /var/run/dpdk/spdk0 00:35:33.327 Removing: /var/run/dpdk/spdk1 00:35:33.327 Removing: /var/run/dpdk/spdk2 00:35:33.327 Removing: /var/run/dpdk/spdk3 00:35:33.327 Removing: /var/run/dpdk/spdk4 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1000129 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1000257 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1000563 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1000691 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1000861 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1000899 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1001147 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1001166 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1001645 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1001815 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1002019 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1004139 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1007394 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1014540 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1014953 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1017470 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1017745 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1020277 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1024022 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1026195 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1032622 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1037975 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1039178 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1039963 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1050971 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1053396 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1080658 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1084459 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1088307 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1092578 00:35:33.327 Removing: 
/var/run/dpdk/spdk_pid1092695 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1093241 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1093895 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1094551 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1094938 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1094955 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1095106 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1095239 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1095241 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1095897 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1096549 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1097098 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1097494 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1097613 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1097758 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1098711 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1099504 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1104841 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1132897 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1136448 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1137625 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1138863 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1139029 00:35:33.327 Removing: /var/run/dpdk/spdk_pid1139130 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1139269 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1139832 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1141151 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1141888 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1142325 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1143941 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1144360 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1144806 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1147193 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1150590 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1150591 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1150592 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1152819 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1157675 
00:35:33.328 Removing: /var/run/dpdk/spdk_pid1160452 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1164353 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1165286 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1166884 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1167881 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1170746 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1173297 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1175587 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1179819 00:35:33.328 Removing: /var/run/dpdk/spdk_pid1179938 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1182728 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1182983 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1183123 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1183392 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1183397 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1186166 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1186616 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1189292 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1191170 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1194615 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1198020 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1205128 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1209607 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1209615 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1222096 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1222516 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1222926 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1223338 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1223913 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1224444 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1224858 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1225263 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1227765 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1227910 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1231839 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1231899 00:35:33.587 Removing: /var/run/dpdk/spdk_pid1235266 00:35:33.587 Removing: 
/var/run/dpdk/spdk_pid1237876
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1245416
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1245822
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1248336
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1248606
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1251112
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1254804
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1256971
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1263337
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1268543
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1269729
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1270392
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1281192
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1283487
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1285453
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1290512
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1290517
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1293543
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1294941
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1296340
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1297092
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1298605
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1299369
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1304762
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1305159
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1305551
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1307213
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1307613
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1308392
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1310855
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1310859
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1312447
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1312811
00:35:33.587 Removing: /var/run/dpdk/spdk_pid1312823
00:35:33.587 Removing: /var/run/dpdk/spdk_pid989055
00:35:33.587 Removing: /var/run/dpdk/spdk_pid989802
00:35:33.587 Removing: /var/run/dpdk/spdk_pid990744
00:35:33.587 Removing: /var/run/dpdk/spdk_pid991100
00:35:33.587 Removing: /var/run/dpdk/spdk_pid991764
00:35:33.587 Removing: /var/run/dpdk/spdk_pid991904
00:35:33.587 Removing: /var/run/dpdk/spdk_pid992622
00:35:33.587 Removing: /var/run/dpdk/spdk_pid992742
00:35:33.587 Removing: /var/run/dpdk/spdk_pid993008
00:35:33.587 Removing: /var/run/dpdk/spdk_pid994210
00:35:33.587 Removing: /var/run/dpdk/spdk_pid995134
00:35:33.587 Removing: /var/run/dpdk/spdk_pid995452
00:35:33.587 Removing: /var/run/dpdk/spdk_pid995646
00:35:33.587 Removing: /var/run/dpdk/spdk_pid995865
00:35:33.587 Removing: /var/run/dpdk/spdk_pid996100
00:35:33.587 Removing: /var/run/dpdk/spdk_pid996338
00:35:33.587 Removing: /var/run/dpdk/spdk_pid996492
00:35:33.587 Removing: /var/run/dpdk/spdk_pid996678
00:35:33.587 Removing: /var/run/dpdk/spdk_pid996882
00:35:33.587 Removing: /var/run/dpdk/spdk_pid999364
00:35:33.587 Removing: /var/run/dpdk/spdk_pid999528
00:35:33.587 Removing: /var/run/dpdk/spdk_pid999698
00:35:33.587 Removing: /var/run/dpdk/spdk_pid999822
00:35:33.587 Clean
00:35:33.587 19:33:44 -- common/autotest_common.sh@1453 -- # return 0
00:35:33.587 19:33:44 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:33.587 19:33:44 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:33.587 19:33:44 -- common/autotest_common.sh@10 -- # set +x
00:35:33.587 19:33:44 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:33.587 19:33:44 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:33.587 19:33:44 -- common/autotest_common.sh@10 -- # set +x
00:35:33.587 19:33:44 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:33.587 19:33:44 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:33.587 19:33:44 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:33.845 19:33:44 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:33.845 19:33:44 -- spdk/autotest.sh@398 -- # hostname
00:35:33.845 19:33:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:33.845 geninfo: WARNING: invalid characters removed from testname!
00:36:05.914 19:34:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:09.231 19:34:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:12.511 19:34:22 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:15.040 19:34:25 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:18.320 19:34:28 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:21.602 19:34:31 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:24.236 19:34:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:24.236 19:34:34 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:24.236 19:34:34 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:24.236 19:34:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:24.236 19:34:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:24.236 19:34:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:24.236 + [[ -n 918449 ]]
00:36:24.236 + sudo kill 918449
00:36:24.247 [Pipeline] }
00:36:24.263 [Pipeline] // stage
00:36:24.269 [Pipeline] }
00:36:24.285 [Pipeline] // timeout
00:36:24.291 [Pipeline] }
00:36:24.306 [Pipeline] // catchError
00:36:24.311 [Pipeline] }
00:36:24.327 [Pipeline] // wrap
00:36:24.333 [Pipeline] }
00:36:24.347 [Pipeline] // catchError
00:36:24.359 [Pipeline] stage
00:36:24.362 [Pipeline] { (Epilogue)
00:36:24.377 [Pipeline] catchError
00:36:24.379 [Pipeline] {
00:36:24.393 [Pipeline] echo
00:36:24.395 Cleanup processes
00:36:24.401 [Pipeline] sh
00:36:24.688 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:24.688 1323499 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:24.700 [Pipeline] sh
00:36:24.982 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:24.982 ++ grep -v 'sudo pgrep'
00:36:24.982 ++ awk '{print $1}'
00:36:24.982 + sudo kill -9
00:36:24.982 + true
00:36:24.994 [Pipeline] sh
00:36:25.278 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:35.254 [Pipeline] sh
00:36:35.540 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:35.540 Artifacts sizes are good
00:36:35.554 [Pipeline] archiveArtifacts
00:36:35.560 Archiving artifacts
00:36:35.703 [Pipeline] sh
00:36:35.988 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:36.005 [Pipeline] cleanWs
00:36:36.015 [WS-CLEANUP] Deleting project workspace...
00:36:36.015 [WS-CLEANUP] Deferred wipeout is used...
00:36:36.022 [WS-CLEANUP] done
00:36:36.024 [Pipeline] }
00:36:36.041 [Pipeline] // catchError
00:36:36.054 [Pipeline] sh
00:36:36.336 + logger -p user.info -t JENKINS-CI
00:36:36.345 [Pipeline] }
00:36:36.359 [Pipeline] // stage
00:36:36.364 [Pipeline] }
00:36:36.378 [Pipeline] // node
00:36:36.383 [Pipeline] End of Pipeline
00:36:36.416 Finished: SUCCESS